From patchwork Fri Jul 26 00:56:44 2019
X-Patchwork-Submitter: Ralph Campbell
X-Patchwork-Id: 11060093
From: Ralph Campbell <rcampbell@nvidia.com>
Cc: Ralph Campbell, Jérôme Glisse, Jason Gunthorpe, Ben Skeggs
Subject: [PATCH v2 1/7] mm/hmm: replace hmm_update with mmu_notifier_range
Date: Thu, 25 Jul 2019 17:56:44 -0700
Message-ID: <20190726005650.2566-2-rcampbell@nvidia.com>
In-Reply-To: <20190726005650.2566-1-rcampbell@nvidia.com>
References: <20190726005650.2566-1-rcampbell@nvidia.com>
The hmm_mirror_ops callback function sync_cpu_device_pagetables()
passes a struct hmm_update which is a simplified version of struct
mmu_notifier_range. This is unnecessary, so replace hmm_update with
mmu_notifier_range directly.

Signed-off-by: Ralph Campbell
Reviewed-by: Christoph Hellwig
Cc: "Jérôme Glisse"
Cc: Jason Gunthorpe
Cc: Ben Skeggs
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c |  8 +++----
 drivers/gpu/drm/nouveau/nouveau_svm.c  |  4 ++--
 include/linux/hmm.h                    | 31 ++++----------------------
 mm/hmm.c                               | 13 ++++-------
 4 files changed, 14 insertions(+), 42 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
index 3971c201f320..cf945080dff3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
@@ -196,12 +196,12 @@ static void amdgpu_mn_invalidate_node(struct amdgpu_mn_node *node,
  * potentially dirty.
  */
 static int amdgpu_mn_sync_pagetables_gfx(struct hmm_mirror *mirror,
-			const struct hmm_update *update)
+			const struct mmu_notifier_range *update)
 {
 	struct amdgpu_mn *amn = container_of(mirror, struct amdgpu_mn, mirror);
 	unsigned long start = update->start;
 	unsigned long end = update->end;
-	bool blockable = update->blockable;
+	bool blockable = mmu_notifier_range_blockable(update);
 	struct interval_tree_node *it;
 
 	/* notification is exclusive, but interval is inclusive */
@@ -244,12 +244,12 @@ static int amdgpu_mn_sync_pagetables_gfx(struct hmm_mirror *mirror,
  * are restorted in amdgpu_mn_invalidate_range_end_hsa.
  */
 static int amdgpu_mn_sync_pagetables_hsa(struct hmm_mirror *mirror,
-			const struct hmm_update *update)
+			const struct mmu_notifier_range *update)
 {
 	struct amdgpu_mn *amn = container_of(mirror, struct amdgpu_mn, mirror);
 	unsigned long start = update->start;
 	unsigned long end = update->end;
-	bool blockable = update->blockable;
+	bool blockable = mmu_notifier_range_blockable(update);
 	struct interval_tree_node *it;
 
 	/* notification is exclusive, but interval is inclusive */
diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index 545100f7c594..79b29c918717 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -252,13 +252,13 @@ nouveau_svmm_invalidate(struct nouveau_svmm *svmm, u64 start, u64 limit)
 
 static int
 nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
-					const struct hmm_update *update)
+					const struct mmu_notifier_range *update)
 {
 	struct nouveau_svmm *svmm = container_of(mirror, typeof(*svmm), mirror);
 	unsigned long start = update->start;
 	unsigned long limit = update->end;
 
-	if (!update->blockable)
+	if (!mmu_notifier_range_blockable(update))
 		return -EAGAIN;
 
 	SVMM_DBG(svmm, "invalidate %016lx-%016lx", start, limit);
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 9f32586684c9..659e25a15700 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -340,29 +340,6 @@ static inline uint64_t hmm_device_entry_from_pfn(const struct hmm_range *range,
 
 struct hmm_mirror;
 
-/*
- * enum hmm_update_event - type of update
- * @HMM_UPDATE_INVALIDATE: invalidate range (no indication as to why)
- */
-enum hmm_update_event {
-	HMM_UPDATE_INVALIDATE,
-};
-
-/*
- * struct hmm_update - HMM update information for callback
- *
- * @start: virtual start address of the range to update
- * @end: virtual end address of the range to update
- * @event: event triggering the update (what is happening)
- * @blockable: can the callback block/sleep ?
- */
-struct hmm_update {
-	unsigned long start;
-	unsigned long end;
-	enum hmm_update_event event;
-	bool blockable;
-};
-
 /*
  * struct hmm_mirror_ops - HMM mirror device operations callback
  *
@@ -383,9 +360,9 @@ struct hmm_mirror_ops {
 	/* sync_cpu_device_pagetables() - synchronize page tables
 	 *
 	 * @mirror: pointer to struct hmm_mirror
-	 * @update: update information (see struct hmm_update)
-	 * Return: -EAGAIN if update.blockable false and callback need to
-	 *         block, 0 otherwise.
+	 * @update: update information (see struct mmu_notifier_range)
+	 * Return: -EAGAIN if mmu_notifier_range_blockable(update) is false
+	 * and callback needs to block, 0 otherwise.
 	 *
 	 * This callback ultimately originates from mmu_notifiers when the CPU
 	 * page table is updated. The device driver must update its page table
@@ -397,7 +374,7 @@ struct hmm_mirror_ops {
 	 * synchronous call.
 	 */
 	int (*sync_cpu_device_pagetables)(struct hmm_mirror *mirror,
-					  const struct hmm_update *update);
+				const struct mmu_notifier_range *update);
 };
 
 /*
diff --git a/mm/hmm.c b/mm/hmm.c
index 54b3a4162ae9..4040b4427635 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -165,7 +165,6 @@ static int hmm_invalidate_range_start(struct mmu_notifier *mn,
 {
 	struct hmm *hmm = container_of(mn, struct hmm, mmu_notifier);
 	struct hmm_mirror *mirror;
-	struct hmm_update update;
 	struct hmm_range *range;
 	unsigned long flags;
 	int ret = 0;
@@ -173,15 +172,10 @@ static int hmm_invalidate_range_start(struct mmu_notifier *mn,
 	if (!kref_get_unless_zero(&hmm->kref))
 		return 0;
 
-	update.start = nrange->start;
-	update.end = nrange->end;
-	update.event = HMM_UPDATE_INVALIDATE;
-	update.blockable = mmu_notifier_range_blockable(nrange);
-
 	spin_lock_irqsave(&hmm->ranges_lock, flags);
 	hmm->notifiers++;
 	list_for_each_entry(range, &hmm->ranges, list) {
-		if (update.end < range->start || update.start >= range->end)
+		if (nrange->end < range->start || nrange->start >= range->end)
 			continue;
 
 		range->valid = false;
@@ -198,9 +192,10 @@ static int hmm_invalidate_range_start(struct mmu_notifier *mn,
 	list_for_each_entry(mirror, &hmm->mirrors, list) {
 		int rc;
 
-		rc = mirror->ops->sync_cpu_device_pagetables(mirror, &update);
+		rc = mirror->ops->sync_cpu_device_pagetables(mirror, nrange);
 		if (rc) {
-			if (WARN_ON(update.blockable || rc != -EAGAIN))
+			if (WARN_ON(mmu_notifier_range_blockable(nrange) ||
+			    rc != -EAGAIN))
 				continue;
 			ret = -EAGAIN;
 			break;
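
For illustration, a minimal sketch (not part of the patch) of what a
mirror callback looks like after this change. Only
sync_cpu_device_pagetables(), struct mmu_notifier_range and
mmu_notifier_range_blockable() come from the patch; the example_* names
are hypothetical:

#include <linux/hmm.h>
#include <linux/mmu_notifier.h>

/* Hypothetical driver state embedding the HMM mirror. */
struct example_mirror {
	struct hmm_mirror mirror;
	/* ... device page table state ... */
};

/* Stand-in for the driver's device page table invalidation. */
static void example_invalidate(struct example_mirror *em,
			       unsigned long start, unsigned long end)
{
}

static int example_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
		const struct mmu_notifier_range *update)
{
	struct example_mirror *em =
		container_of(mirror, struct example_mirror, mirror);

	/*
	 * The blockable state is now queried from the notifier range
	 * itself instead of a separate hmm_update.blockable field.
	 */
	if (!mmu_notifier_range_blockable(update))
		return -EAGAIN;

	example_invalidate(em, update->start, update->end);
	return 0;
}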

From patchwork Fri Jul 26 00:56:45 2019
X-Patchwork-Submitter: Ralph Campbell
X-Patchwork-Id: 11060089
From: Ralph Campbell <rcampbell@nvidia.com>
Cc: Ralph Campbell, Jérôme Glisse, Jason Gunthorpe, Christoph Hellwig
Subject: [PATCH v2 2/7] mm/hmm: a few more C style and comment clean ups
Date: Thu, 25 Jul 2019 17:56:45 -0700
Message-ID: <20190726005650.2566-3-rcampbell@nvidia.com>
In-Reply-To: <20190726005650.2566-1-rcampbell@nvidia.com>
References: <20190726005650.2566-1-rcampbell@nvidia.com>

A few more comments and minor programming style clean ups. There
should be no functional changes.
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: Jason Gunthorpe
Cc: Christoph Hellwig
Reviewed-by: Christoph Hellwig
---
 mm/hmm.c | 39 +++++++++++++++++----------------------
 1 file changed, 17 insertions(+), 22 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index 4040b4427635..362944b0fbca 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -32,7 +32,7 @@ static const struct mmu_notifier_ops hmm_mmu_notifier_ops;
  * hmm_get_or_create - register HMM against an mm (HMM internal)
  *
  * @mm: mm struct to attach to
- * Returns: returns an HMM object, either by referencing the existing
+ * Return: an HMM object, either by referencing the existing
  * (per-process) object, or by creating a new one.
  *
  * This is not intended to be used directly by device drivers. If mm already
@@ -325,8 +325,8 @@ static int hmm_pfns_bad(unsigned long addr,
 }
 
 /*
- * hmm_vma_walk_hole() - handle a range lacking valid pmd or pte(s)
- * @start: range virtual start address (inclusive)
+ * hmm_vma_walk_hole_() - handle a range lacking valid pmd or pte(s)
+ * @addr: range virtual start address (inclusive)
  * @end: range virtual end address (exclusive)
  * @fault: should we fault or not ?
  * @write_fault: write fault ?
@@ -376,9 +376,9 @@ static inline void hmm_pte_need_fault(const struct hmm_vma_walk *hmm_vma_walk,
 	/*
 	 * So we not only consider the individual per page request we also
 	 * consider the default flags requested for the range. The API can
-	 * be use in 2 fashions. The first one where the HMM user coalesce
-	 * multiple page fault into one request and set flags per pfns for
-	 * of those faults. The second one where the HMM user want to pre-
+	 * be used 2 ways. The first one where the HMM user coalesces
+	 * multiple page faults into one request and sets flags per pfn for
+	 * those faults. The second one where the HMM user wants to pre-
	 * fault a range with specific flags. For the latter one it is a
 	 * waste to have the user pre-fill the pfn arrays with a default
 	 * flags value.
@@ -388,7 +388,7 @@ static inline void hmm_pte_need_fault(const struct hmm_vma_walk *hmm_vma_walk,
 	/* We aren't ask to do anything ... */
 	if (!(pfns & range->flags[HMM_PFN_VALID]))
 		return;
-	/* If this is device memory than only fault if explicitly requested */
+	/* If this is device memory then only fault if explicitly requested */
 	if ((cpu_flags & range->flags[HMM_PFN_DEVICE_PRIVATE])) {
 		/* Do we fault on device memory ? */
 		if (pfns & range->flags[HMM_PFN_DEVICE_PRIVATE]) {
@@ -502,7 +502,7 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk,
 	hmm_vma_walk->last = end;
 	return 0;
 #else
-	/* If THP is not enabled then we should never reach that code ! */
+	/* If THP is not enabled then we should never reach this code ! */
 	return -EINVAL;
 #endif
 }
@@ -522,7 +522,6 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 {
 	struct hmm_vma_walk *hmm_vma_walk = walk->private;
 	struct hmm_range *range = hmm_vma_walk->range;
-	struct vm_area_struct *vma = walk->vma;
 	bool fault, write_fault;
 	uint64_t cpu_flags;
 	pte_t pte = *ptep;
@@ -571,8 +570,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 	if (fault || write_fault) {
 		pte_unmap(ptep);
 		hmm_vma_walk->last = addr;
-		migration_entry_wait(vma->vm_mm,
-				     pmdp, addr);
+		migration_entry_wait(walk->mm, pmdp, addr);
 		return -EBUSY;
 	}
 	return 0;
@@ -620,13 +618,11 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 {
 	struct hmm_vma_walk *hmm_vma_walk = walk->private;
 	struct hmm_range *range = hmm_vma_walk->range;
-	struct vm_area_struct *vma = walk->vma;
 	uint64_t *pfns = range->pfns;
 	unsigned long addr = start, i;
 	pte_t *ptep;
 	pmd_t pmd;
 
-
 again:
 	pmd = READ_ONCE(*pmdp);
 	if (pmd_none(pmd))
@@ -648,7 +644,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 					   0, &fault, &write_fault);
 		if (fault || write_fault) {
 			hmm_vma_walk->last = addr;
-			pmd_migration_entry_wait(vma->vm_mm, pmdp);
+			pmd_migration_entry_wait(walk->mm, pmdp);
 			return -EBUSY;
 		}
 		return 0;
@@ -657,11 +653,11 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 
 	if (pmd_devmap(pmd) || pmd_trans_huge(pmd)) {
 		/*
-		 * No need to take pmd_lock here, even if some other threads
+		 * No need to take pmd_lock here, even if some other thread
 		 * is splitting the huge pmd we will get that event through
 		 * mmu_notifier callback.
 		 *
-		 * So just read pmd value and check again its a transparent
+		 * So just read pmd value and check again it's a transparent
 		 * huge or device mapping one and compute corresponding pfn
 		 * values.
 		 */
@@ -675,7 +671,7 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 	}
 
 	/*
-	 * We have handled all the valid case above ie either none, migration,
+	 * We have handled all the valid cases above ie either none, migration,
 	 * huge or transparent huge. At this point either it is a valid pmd
 	 * entry pointing to pte directory or it is a bad pmd that will not
 	 * recover.
@@ -795,10 +791,10 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 	pte_t entry;
 	int ret = 0;
 
-	size = 1UL << huge_page_shift(h);
+	size = huge_page_size(h);
 	mask = size - 1;
 	if (range->page_shift != PAGE_SHIFT) {
-		/* Make sure we are looking at full page. */
+		/* Make sure we are looking at a full page. */
 		if (start & mask)
 			return -EINVAL;
 		if (end < (start + size))
@@ -809,8 +805,7 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 		size = PAGE_SIZE;
 	}
 
-
-	ptl = huge_pte_lock(hstate_vma(walk->vma), walk->mm, pte);
+	ptl = huge_pte_lock(hstate_vma(vma), walk->mm, pte);
 	entry = huge_ptep_get(pte);
 
 	i = (start - range->start) >> range->page_shift;
@@ -859,7 +854,7 @@ static void hmm_pfns_clear(struct hmm_range *range,
  * @start: start virtual address (inclusive)
  * @end: end virtual address (exclusive)
  * @page_shift: expect page shift for the range
- * Returns 0 on success, -EFAULT if the address space is no longer valid
+ * Return: 0 on success, -EFAULT if the address space is no longer valid
  *
  * Track updates to the CPU page table see include/linux/hmm.h
  */
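
The one hunk above that is not purely textual replaces
1UL << huge_page_shift(h) with huge_page_size(h). That is still
non-functional, since the hugetlb helpers of this era expand to the
same value (paraphrased from include/linux/hugetlb.h):

/*
 *   huge_page_size(h)  == (unsigned long)PAGE_SIZE << h->order
 *   huge_page_shift(h) == h->order + PAGE_SHIFT
 *
 * so:  1UL << huge_page_shift(h)
 *   == 1UL << (h->order + PAGE_SHIFT)
 *   == PAGE_SIZE << h->order
 *   == huge_page_size(h)
 */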

From patchwork Fri Jul 26 00:56:46 2019
X-Patchwork-Submitter: Ralph Campbell
X-Patchwork-Id: 11060095
From: Ralph Campbell <rcampbell@nvidia.com>
Cc: Christoph Hellwig, Ralph Campbell, Jérôme Glisse, Jason Gunthorpe
Subject: [PATCH v2 3/7] mm/hmm: replace the block argument to hmm_range_fault
 with a flags value
Date: Thu, 25 Jul 2019 17:56:46 -0700
Message-ID: <20190726005650.2566-4-rcampbell@nvidia.com>
In-Reply-To: <20190726005650.2566-1-rcampbell@nvidia.com>
References: <20190726005650.2566-1-rcampbell@nvidia.com>

From: Christoph Hellwig

This allows easier expansion to other flags, and also makes the
callers a little easier to read.
Signed-off-by: Christoph Hellwig
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: Jason Gunthorpe
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c |  2 +-
 drivers/gpu/drm/nouveau/nouveau_svm.c   |  2 +-
 include/linux/hmm.h                     | 11 +++-
 mm/hmm.c                                | 74 ++++++++++++-------------
 4 files changed, 48 insertions(+), 41 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index e51b48ac48eb..12a59ac83f72 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -832,7 +832,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 
 	down_read(&mm->mmap_sem);
 
-	r = hmm_range_fault(range, true);
+	r = hmm_range_fault(range, 0);
 	if (unlikely(r < 0)) {
 		if (likely(r == -EAGAIN)) {
 			/*
diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index 79b29c918717..49b520c60fc5 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -505,7 +505,7 @@ nouveau_range_fault(struct hmm_mirror *mirror, struct hmm_range *range)
 		return -EBUSY;
 	}
 
-	ret = hmm_range_fault(range, true);
+	ret = hmm_range_fault(range, 0);
 	if (ret <= 0) {
 		if (ret == 0)
 			ret = -EBUSY;
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 659e25a15700..15f1b113be3c 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -406,12 +406,19 @@ int hmm_range_register(struct hmm_range *range,
 		       unsigned long end,
 		       unsigned page_shift);
 void hmm_range_unregister(struct hmm_range *range);
+
+/*
+ * Retry fault if non-blocking, drop mmap_sem and return -EAGAIN in that case.
+ */
+#define HMM_FAULT_ALLOW_RETRY	(1 << 0)
+
 long hmm_range_snapshot(struct hmm_range *range);
-long hmm_range_fault(struct hmm_range *range, bool block);
+long hmm_range_fault(struct hmm_range *range, unsigned int flags);
+
 long hmm_range_dma_map(struct hmm_range *range,
 		       struct device *device,
 		       dma_addr_t *daddrs,
-		       bool block);
+		       unsigned int flags);
 long hmm_range_dma_unmap(struct hmm_range *range,
 			 struct vm_area_struct *vma,
 			 struct device *device,
diff --git a/mm/hmm.c b/mm/hmm.c
index 362944b0fbca..84f2791d3510 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -281,7 +281,7 @@ struct hmm_vma_walk {
 	struct dev_pagemap	*pgmap;
 	unsigned long		last;
 	bool			fault;
-	bool			block;
+	unsigned int		flags;
 };
 
 static int hmm_vma_do_fault(struct mm_walk *walk, unsigned long addr,
@@ -293,8 +293,11 @@ static int hmm_vma_do_fault(struct mm_walk *walk, unsigned long addr,
 	struct vm_area_struct *vma = walk->vma;
 	vm_fault_t ret;
 
-	flags |= hmm_vma_walk->block ? 0 : FAULT_FLAG_ALLOW_RETRY;
-	flags |= write_fault ? FAULT_FLAG_WRITE : 0;
+	if (hmm_vma_walk->flags & HMM_FAULT_ALLOW_RETRY)
+		flags |= FAULT_FLAG_ALLOW_RETRY;
+	if (write_fault)
+		flags |= FAULT_FLAG_WRITE;
+
 	ret = handle_mm_fault(vma, addr, flags);
 	if (ret & VM_FAULT_RETRY) {
 		/* Note, handle_mm_fault did up_read(&mm->mmap_sem)) */
@@ -1012,26 +1015,26 @@ long hmm_range_snapshot(struct hmm_range *range)
 }
 EXPORT_SYMBOL(hmm_range_snapshot);
 
-/*
- * hmm_range_fault() - try to fault some address in a virtual address range
- * @range: range being faulted
- * @block: allow blocking on fault (if true it sleeps and do not drop mmap_sem)
- * Return: number of valid pages in range->pfns[] (from range start
- *         address). This may be zero. If the return value is negative,
- *         then one of the following values may be returned:
+/**
+ * hmm_range_fault - try to fault some address in a virtual address range
+ * @range: range being faulted
+ * @flags: HMM_FAULT_* flags
  *
- *           -EINVAL invalid arguments or mm or virtual address are in an
- *                   invalid vma (for instance device file vma).
- *           -ENOMEM: Out of memory.
- *           -EPERM: Invalid permission (for instance asking for write and
- *                   range is read only).
- *           -EAGAIN: If you need to retry and mmap_sem was drop. This can only
- *                    happens if block argument is false.
- *           -EBUSY: If the the range is being invalidated and you should wait
- *                   for invalidation to finish.
- *           -EFAULT: Invalid (ie either no valid vma or it is illegal to access
- *                    that range), number of valid pages in range->pfns[] (from
- *                    range start address).
+ * Return: the number of valid pages in range->pfns[] (from range start
+ * address), which may be zero. On error one of the following status codes
+ * can be returned:
+ *
+ * -EINVAL:	Invalid arguments or mm or virtual address is in an invalid vma
+ *		(e.g., device file vma).
+ * -ENOMEM:	Out of memory.
+ * -EPERM:	Invalid permission (e.g., asking for write and range is read
+ *		only).
+ * -EAGAIN:	A page fault needs to be retried and mmap_sem was dropped.
+ * -EBUSY:	The range has been invalidated and the caller needs to wait for
+ *		the invalidation to finish.
+ * -EFAULT:	Invalid (i.e., either no valid vma or it is illegal to access
+ *		that range) number of valid pages in range->pfns[] (from
+ *		range start address).
  *
  * This is similar to a regular CPU page fault except that it will not trigger
  * any memory migration if the memory being faulted is not accessible by CPUs
@@ -1040,7 +1043,7 @@ EXPORT_SYMBOL(hmm_range_snapshot);
  * On error, for one virtual address in the range, the function will mark the
  * corresponding HMM pfn entry with an error flag.
  */
-long hmm_range_fault(struct hmm_range *range, bool block)
+long hmm_range_fault(struct hmm_range *range, unsigned int flags)
 {
 	const unsigned long device_vma = VM_IO | VM_PFNMAP | VM_MIXEDMAP;
 	unsigned long start = range->start, end;
@@ -1086,7 +1089,7 @@ long hmm_range_fault(struct hmm_range *range, bool block)
 		hmm_vma_walk.pgmap = NULL;
 		hmm_vma_walk.last = start;
 		hmm_vma_walk.fault = true;
-		hmm_vma_walk.block = block;
+		hmm_vma_walk.flags = flags;
 		hmm_vma_walk.range = range;
 		mm_walk.private = &hmm_vma_walk;
 		end = min(range->end, vma->vm_end);
@@ -1125,25 +1128,22 @@ long hmm_range_fault(struct hmm_range *range, bool block)
 EXPORT_SYMBOL(hmm_range_fault);
 
 /**
- * hmm_range_dma_map() - hmm_range_fault() and dma map page all in one.
- * @range: range being faulted
- * @device: device against to dma map page to
- * @daddrs: dma address of mapped pages
- * @block: allow blocking on fault (if true it sleeps and do not drop mmap_sem)
- * Return: number of pages mapped on success, -EAGAIN if mmap_sem have been
- *         drop and you need to try again, some other error value otherwise
+ * hmm_range_dma_map - hmm_range_fault() and dma map page all in one.
+ * @range: range being faulted
+ * @device: device to map page to
+ * @daddrs: array of dma addresses for the mapped pages
+ * @flags: HMM_FAULT_*
  *
- * Note same usage pattern as hmm_range_fault().
+ * Return: the number of pages mapped on success (including zero), or any
+ * status return from hmm_range_fault() otherwise.
  */
-long hmm_range_dma_map(struct hmm_range *range,
-		       struct device *device,
-		       dma_addr_t *daddrs,
-		       bool block)
+long hmm_range_dma_map(struct hmm_range *range, struct device *device,
+		dma_addr_t *daddrs, unsigned int flags)
 {
 	unsigned long i, npages, mapped;
 	long ret;
 
-	ret = hmm_range_fault(range, block);
+	ret = hmm_range_fault(range, flags);
 	if (ret <= 0)
 		return ret ? ret : -EBUSY;
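
To make the conversion concrete, a caller-side sketch (not from the
patch itself; the flag name is the one defined above). The old
block=true becomes flags=0, and block=false becomes
HMM_FAULT_ALLOW_RETRY:

	/* Before: blocking fault; mmap_sem stays held for the whole call. */
	ret = hmm_range_fault(range, true);
	/* After: */
	ret = hmm_range_fault(range, 0);

	/* Before: non-blocking fault; may drop mmap_sem and return -EAGAIN. */
	ret = hmm_range_fault(range, false);
	/* After: */
	ret = hmm_range_fault(range, HMM_FAULT_ALLOW_RETRY);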

From patchwork Fri Jul 26 00:56:47 2019
X-Patchwork-Submitter: Ralph Campbell
X-Patchwork-Id: 11060099
From: Ralph Campbell <rcampbell@nvidia.com>
Cc: Christoph Hellwig, Ralph Campbell, Jérôme Glisse, Jason Gunthorpe
Subject: [PATCH v2 4/7] mm: merge hmm_range_snapshot into hmm_range_fault
Date: Thu, 25 Jul 2019 17:56:47 -0700
Message-ID: <20190726005650.2566-5-rcampbell@nvidia.com>
In-Reply-To: <20190726005650.2566-1-rcampbell@nvidia.com>
References: <20190726005650.2566-1-rcampbell@nvidia.com>

From: Christoph Hellwig

Add a HMM_FAULT_SNAPSHOT flag so that hmm_range_snapshot can be merged
into the almost identical hmm_range_fault function.

Signed-off-by: Christoph Hellwig
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: Jason Gunthorpe
---
 Documentation/vm/hmm.rst | 17 ++++----
 include/linux/hmm.h      |  4 +-
 mm/hmm.c                 | 85 +---------------------------------------
 3 files changed, 13 insertions(+), 93 deletions(-)

diff --git a/Documentation/vm/hmm.rst b/Documentation/vm/hmm.rst
index 710ce1c701bf..ddcb5ca8b296 100644
--- a/Documentation/vm/hmm.rst
+++ b/Documentation/vm/hmm.rst
@@ -192,15 +192,14 @@ read only, or fully unmap, etc.). The device must complete the update before
 the driver callback returns.
 
 When the device driver wants to populate a range of virtual addresses, it can
-use either::
+use::
 
-  long hmm_range_snapshot(struct hmm_range *range);
-  long hmm_range_fault(struct hmm_range *range, bool block);
+  long hmm_range_fault(struct hmm_range *range, unsigned int flags);
 
-The first one (hmm_range_snapshot()) will only fetch present CPU page table
+With the HMM_RANGE_SNAPSHOT flag, it will only fetch present CPU page table
 entries and will not trigger a page fault on missing or non-present entries.
-The second one does trigger a page fault on missing or read-only entries if
-write access is requested (see below). Page faults use the generic mm page
+Without that flag, it does trigger a page fault on missing or read-only entries
+if write access is requested (see below). Page faults use the generic mm page
 fault code path just like a CPU page fault.
 
 Both functions copy CPU page table entries into their pfns array argument. Each
@@ -227,20 +226,20 @@ The usage pattern is::
 
       /*
        * Just wait for range to be valid, safe to ignore return value as we
-       * will use the return value of hmm_range_snapshot() below under the
+       * will use the return value of hmm_range_fault() below under the
        * mmap_sem to ascertain the validity of the range.
        */
      hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC);
 
  again:
      down_read(&mm->mmap_sem);
-     ret = hmm_range_snapshot(&range);
+     ret = hmm_range_fault(&range, HMM_RANGE_SNAPSHOT);
      if (ret) {
          up_read(&mm->mmap_sem);
          if (ret == -EBUSY) {
            /*
             * No need to check hmm_range_wait_until_valid() return value
-            * on retry we will get proper error with hmm_range_snapshot()
+            * on retry we will get proper error with hmm_range_fault()
             */
            hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC);
            goto again;
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 15f1b113be3c..f3693dcc8b98 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -412,7 +412,9 @@ void hmm_range_unregister(struct hmm_range *range);
  */
 #define HMM_FAULT_ALLOW_RETRY	(1 << 0)
 
-long hmm_range_snapshot(struct hmm_range *range);
+/* Don't fault in missing PTEs, just snapshot the current state. */
+#define HMM_FAULT_SNAPSHOT	(1 << 1)
+
 long hmm_range_fault(struct hmm_range *range, unsigned int flags);
 
 long hmm_range_dma_map(struct hmm_range *range,
diff --git a/mm/hmm.c b/mm/hmm.c
index 84f2791d3510..1bc014cddd78 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -280,7 +280,6 @@ struct hmm_vma_walk {
 	struct hmm_range	*range;
 	struct dev_pagemap	*pgmap;
 	unsigned long		last;
-	bool			fault;
 	unsigned int		flags;
 };
 
@@ -373,7 +372,7 @@ static inline void hmm_pte_need_fault(const struct hmm_vma_walk *hmm_vma_walk,
 {
 	struct hmm_range *range = hmm_vma_walk->range;
 
-	if (!hmm_vma_walk->fault)
+	if (hmm_vma_walk->flags & HMM_FAULT_SNAPSHOT)
 		return;
 
 	/*
@@ -418,7 +417,7 @@ static void hmm_range_need_fault(const struct hmm_vma_walk *hmm_vma_walk,
 {
 	unsigned long i;
 
-	if (!hmm_vma_walk->fault) {
+	if (hmm_vma_walk->flags & HMM_FAULT_SNAPSHOT) {
 		*fault = *write_fault = false;
 		return;
 	}
@@ -936,85 +935,6 @@ void hmm_range_unregister(struct hmm_range *range)
 }
 EXPORT_SYMBOL(hmm_range_unregister);
 
-/*
- * hmm_range_snapshot() - snapshot CPU page table for a range
- * @range: range
- * Return: -EINVAL if invalid argument, -ENOMEM out of memory, -EPERM invalid
- *         permission (for instance asking for write and range is read only),
- *         -EBUSY if you need to retry, -EFAULT invalid (ie either no valid
- *         vma or it is illegal to access that range), number of valid pages
- *         in range->pfns[] (from range start address).
- *
- * This snapshots the CPU page table for a range of virtual addresses. Snapshot
- * validity is tracked by range struct. See in include/linux/hmm.h for example
- * on how to use.
- */
-long hmm_range_snapshot(struct hmm_range *range)
-{
-	const unsigned long device_vma = VM_IO | VM_PFNMAP | VM_MIXEDMAP;
-	unsigned long start = range->start, end;
-	struct hmm_vma_walk hmm_vma_walk;
-	struct hmm *hmm = range->hmm;
-	struct vm_area_struct *vma;
-	struct mm_walk mm_walk;
-
-	lockdep_assert_held(&hmm->mm->mmap_sem);
-	do {
-		/* If range is no longer valid force retry. */
-		if (!range->valid)
-			return -EBUSY;
-
-		vma = find_vma(hmm->mm, start);
-		if (vma == NULL || (vma->vm_flags & device_vma))
-			return -EFAULT;
-
-		if (is_vm_hugetlb_page(vma)) {
-			if (huge_page_shift(hstate_vma(vma)) !=
-			    range->page_shift &&
-			    range->page_shift != PAGE_SHIFT)
-				return -EINVAL;
-		} else {
-			if (range->page_shift != PAGE_SHIFT)
-				return -EINVAL;
-		}
-
-		if (!(vma->vm_flags & VM_READ)) {
-			/*
-			 * If vma do not allow read access, then assume that it
-			 * does not allow write access, either. HMM does not
-			 * support architecture that allow write without read.
-			 */
-			hmm_pfns_clear(range, range->pfns,
-				       range->start, range->end);
-			return -EPERM;
-		}
-
-		range->vma = vma;
-		hmm_vma_walk.pgmap = NULL;
-		hmm_vma_walk.last = start;
-		hmm_vma_walk.fault = false;
-		hmm_vma_walk.range = range;
-		mm_walk.private = &hmm_vma_walk;
-		end = min(range->end, vma->vm_end);
-
-		mm_walk.vma = vma;
-		mm_walk.mm = vma->vm_mm;
-		mm_walk.pte_entry = NULL;
-		mm_walk.test_walk = NULL;
-		mm_walk.hugetlb_entry = NULL;
-		mm_walk.pud_entry = hmm_vma_walk_pud;
-		mm_walk.pmd_entry = hmm_vma_walk_pmd;
-		mm_walk.pte_hole = hmm_vma_walk_hole;
-		mm_walk.hugetlb_entry = hmm_vma_walk_hugetlb_entry;
-
-		walk_page_range(start, end, &mm_walk);
-		start = end;
-	} while (start < range->end);
-
-	return (hmm_vma_walk.last - range->start) >> PAGE_SHIFT;
-}
-EXPORT_SYMBOL(hmm_range_snapshot);
-
 /**
  * hmm_range_fault - try to fault some address in a virtual address range
  * @range: range being faulted
@@ -1088,7 +1008,6 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
 		range->vma = vma;
 		hmm_vma_walk.pgmap = NULL;
 		hmm_vma_walk.last = start;
-		hmm_vma_walk.fault = true;
 		hmm_vma_walk.flags = flags;
 		hmm_vma_walk.range = range;
 		mm_walk.private = &hmm_vma_walk;
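
A caller-side sketch of the conversion this patch implies (not from
the patch itself; names are the ones defined above):

	/* Before: a separate entry point that never faults. */
	ret = hmm_range_snapshot(range);

	/*
	 * After: the same page walk, but HMM_FAULT_SNAPSHOT makes
	 * hmm_pte_need_fault()/hmm_range_need_fault() report "no fault
	 * needed", so only the current CPU page table state is read.
	 */
	ret = hmm_range_fault(range, HMM_FAULT_SNAPSHOT);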
From patchwork Fri Jul 26 00:56:48 2019
X-Patchwork-Submitter: Ralph Campbell
X-Patchwork-Id: 11060097
From: Ralph Campbell
CC: "Ralph Campbell", Jérôme Glisse, Jason Gunthorpe, Christoph Hellwig
Subject: [PATCH v2 5/7] mm/hmm: make full use of walk_page_range()
Date: Thu, 25 Jul 2019 17:56:48 -0700
Message-ID: <20190726005650.2566-6-rcampbell@nvidia.com>
In-Reply-To: <20190726005650.2566-1-rcampbell@nvidia.com>

hmm_range_fault() calls find_vma() and walk_page_range() in a loop. This
duplication is unnecessary since walk_page_range() already calls find_vma()
in its own loop. Simplify hmm_range_fault() by defining a test_walk()
callback function to filter unhandled vmas.
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: Jason Gunthorpe
Cc: Christoph Hellwig
Reviewed-by: Christoph Hellwig
---
 mm/hmm.c | 130 ++++++++++++++++++++++++-------------------------------
 1 file changed, 57 insertions(+), 73 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index 1bc014cddd78..838cd1d50497 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -840,13 +840,44 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 #endif
 }
 
-static void hmm_pfns_clear(struct hmm_range *range,
-			   uint64_t *pfns,
-			   unsigned long addr,
-			   unsigned long end)
+static int hmm_vma_walk_test(unsigned long start,
+			     unsigned long end,
+			     struct mm_walk *walk)
 {
-	for (; addr < end; addr += PAGE_SIZE, pfns++)
-		*pfns = range->values[HMM_PFN_NONE];
+	struct hmm_vma_walk *hmm_vma_walk = walk->private;
+	struct hmm_range *range = hmm_vma_walk->range;
+	struct vm_area_struct *vma = walk->vma;
+
+	/* If range is no longer valid, force retry. */
+	if (!range->valid)
+		return -EBUSY;
+
+	/*
+	 * Skip vma ranges that don't have struct page backing them or
+	 * map I/O devices directly.
+	 * TODO: handle peer-to-peer device mappings.
+	 */
+	if (vma->vm_flags & (VM_IO | VM_PFNMAP | VM_MIXEDMAP))
+		return -EFAULT;
+
+	if (is_vm_hugetlb_page(vma)) {
+		if (huge_page_shift(hstate_vma(vma)) != range->page_shift &&
+		    range->page_shift != PAGE_SHIFT)
+			return -EINVAL;
+	} else {
+		if (range->page_shift != PAGE_SHIFT)
+			return -EINVAL;
+	}
+
+	/*
+	 * If vma does not allow read access, then assume that it does not
+	 * allow write access, either. HMM does not support architectures
+	 * that allow write without read.
+	 */
+	if (!(vma->vm_flags & VM_READ))
+		return -EPERM;
+
+	return 0;
 }
 
 /*
@@ -965,82 +996,35 @@ EXPORT_SYMBOL(hmm_range_unregister);
  */
 long hmm_range_fault(struct hmm_range *range, unsigned int flags)
 {
-	const unsigned long device_vma = VM_IO | VM_PFNMAP | VM_MIXEDMAP;
-	unsigned long start = range->start, end;
-	struct hmm_vma_walk hmm_vma_walk;
+	unsigned long start = range->start;
+	struct hmm_vma_walk hmm_vma_walk = {};
 	struct hmm *hmm = range->hmm;
-	struct vm_area_struct *vma;
-	struct mm_walk mm_walk;
+	struct mm_walk mm_walk = {};
 	int ret;
 
 	lockdep_assert_held(&hmm->mm->mmap_sem);
 
-	do {
-		/* If range is no longer valid force retry. */
-		if (!range->valid)
-			return -EBUSY;
+	hmm_vma_walk.range = range;
+	hmm_vma_walk.last = start;
+	hmm_vma_walk.flags = flags;
+	mm_walk.private = &hmm_vma_walk;
 
-		vma = find_vma(hmm->mm, start);
-		if (vma == NULL || (vma->vm_flags & device_vma))
-			return -EFAULT;
-
-		if (is_vm_hugetlb_page(vma)) {
-			if (huge_page_shift(hstate_vma(vma)) !=
-				    range->page_shift &&
-			    range->page_shift != PAGE_SHIFT)
-				return -EINVAL;
-		} else {
-			if (range->page_shift != PAGE_SHIFT)
-				return -EINVAL;
-		}
+	mm_walk.mm = hmm->mm;
+	mm_walk.pud_entry = hmm_vma_walk_pud;
+	mm_walk.pmd_entry = hmm_vma_walk_pmd;
+	mm_walk.pte_hole = hmm_vma_walk_hole;
+	mm_walk.hugetlb_entry = hmm_vma_walk_hugetlb_entry;
+	mm_walk.test_walk = hmm_vma_walk_test;
 
-		if (!(vma->vm_flags & VM_READ)) {
-			/*
-			 * If vma do not allow read access, then assume that it
-			 * does not allow write access, either. HMM does not
-			 * support architecture that allow write without read.
-			 */
-			hmm_pfns_clear(range, range->pfns,
-				range->start, range->end);
-			return -EPERM;
-		}
+	do {
+		ret = walk_page_range(start, range->end, &mm_walk);
+		start = hmm_vma_walk.last;
 
-		range->vma = vma;
-		hmm_vma_walk.pgmap = NULL;
-		hmm_vma_walk.last = start;
-		hmm_vma_walk.flags = flags;
-		hmm_vma_walk.range = range;
-		mm_walk.private = &hmm_vma_walk;
-		end = min(range->end, vma->vm_end);
-
-		mm_walk.vma = vma;
-		mm_walk.mm = vma->vm_mm;
-		mm_walk.pte_entry = NULL;
-		mm_walk.test_walk = NULL;
-		mm_walk.hugetlb_entry = NULL;
-		mm_walk.pud_entry = hmm_vma_walk_pud;
-		mm_walk.pmd_entry = hmm_vma_walk_pmd;
-		mm_walk.pte_hole = hmm_vma_walk_hole;
-		mm_walk.hugetlb_entry = hmm_vma_walk_hugetlb_entry;
-
-		do {
-			ret = walk_page_range(start, end, &mm_walk);
-			start = hmm_vma_walk.last;
-
-			/* Keep trying while the range is valid. */
-		} while (ret == -EBUSY && range->valid);
-
-		if (ret) {
-			unsigned long i;
-
-			i = (hmm_vma_walk.last - range->start) >> PAGE_SHIFT;
-			hmm_pfns_clear(range, &range->pfns[i],
-				hmm_vma_walk.last, range->end);
-			return ret;
-		}
-		start = end;
+		/* Keep trying while the range is valid. */
+	} while (ret == -EBUSY && range->valid);
 
-	} while (start < range->end);
+	if (ret)
+		return ret;
 
 	return (hmm_vma_walk.last - range->start) >> PAGE_SHIFT;
 }
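The simplification leans on the test_walk contract in the mm_walk API of this
era: returning 0 walks the vma, returning 1 skips it, and a negative errno
aborts walk_page_range() and is propagated to the caller. That propagation is
what lets hmm_range_fault() drop its own find_vma() filtering loop. A small
stand-alone illustration (demo_test_walk is a made-up name, not from this
patch):

	/*
	 * Illustration of the test_walk contract; demo_test_walk is
	 * hypothetical. A negative return here aborts walk_page_range()
	 * and becomes its return value.
	 */
	static int demo_test_walk(unsigned long start, unsigned long end,
				  struct mm_walk *walk)
	{
		/* Mirror hmm_vma_walk_test(): refuse device mappings outright. */
		if (walk->vma->vm_flags & (VM_IO | VM_PFNMAP | VM_MIXEDMAP))
			return -EFAULT;

		return 0;	/* walk this vma; returning 1 would skip it */
	}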
From patchwork Fri Jul 26 00:56:49 2019
X-Patchwork-Submitter: Ralph Campbell
X-Patchwork-Id: 11060115
From: Ralph Campbell
CC: "Ralph Campbell", Jérôme Glisse, Jason Gunthorpe, Christoph Hellwig
Subject: [PATCH v2 6/7] mm/hmm: remove hugetlbfs check in hmm_vma_walk_pmd
Date: Thu, 25 Jul 2019 17:56:49 -0700
Message-ID: <20190726005650.2566-7-rcampbell@nvidia.com>
In-Reply-To: <20190726005650.2566-1-rcampbell@nvidia.com>

walk_page_range() only calls hmm_vma_walk_hugetlb_entry() for hugetlbfs
pages and never calls hmm_vma_walk_pmd() in that case. Therefore, it is
safe to remove the check for vma->vm_flags & VM_HUGETLB in
hmm_vma_walk_pmd().
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: Jason Gunthorpe
Cc: Christoph Hellwig
Reviewed-by: Christoph Hellwig
---
 mm/hmm.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index 838cd1d50497..29f322ca5d58 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -630,9 +630,6 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 	if (pmd_none(pmd))
 		return hmm_vma_walk_hole(start, end, walk);
 
-	if (pmd_huge(pmd) && (range->vma->vm_flags & VM_HUGETLB))
-		return hmm_pfns_bad(start, end, walk);
-
 	if (thp_migration_supported() && is_pmd_migration_entry(pmd)) {
 		bool fault, write_fault;
 		unsigned long npages;
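The reasoning holds because of how walk_page_range() dispatches per vma. The
fragment below is a simplified paraphrase of that dispatch in the pagewalk
code of this era, not the verbatim mm/pagewalk.c source:

	/*
	 * Simplified paraphrase of the per-vma dispatch inside the page
	 * walker: hugetlbfs vmas are routed to the hugetlb_entry callback
	 * and never reach the page-table descent that invokes pmd_entry
	 * (hmm_vma_walk_pmd), so the VM_HUGETLB check removed above was
	 * dead code.
	 */
	if (vma && is_vm_hugetlb_page(vma)) {
		if (walk->hugetlb_entry)
			err = walk_hugetlb_range(start, end, walk);
	} else {
		err = walk_pgd_range(start, end, walk);	/* reaches pmd_entry */
	}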
From patchwork Fri Jul 26 00:56:50 2019
X-Patchwork-Submitter: Ralph Campbell
X-Patchwork-Id: 11060103
From: Ralph Campbell
CC: "Ralph Campbell", Jason Gunthorpe, Jérôme Glisse, Christoph Hellwig
Subject: [PATCH v2 7/7] mm/hmm: remove hmm_range vma
Date: Thu, 25 Jul 2019 17:56:50 -0700
Message-ID: <20190726005650.2566-8-rcampbell@nvidia.com>
In-Reply-To: <20190726005650.2566-1-rcampbell@nvidia.com>
References: <20190726005650.2566-1-rcampbell@nvidia.com>

Since hmm_range_fault() doesn't use the struct hmm_range vma field, remove
it.

Suggested-by: Jason Gunthorpe
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: Christoph Hellwig
Reviewed-by: Christoph Hellwig
---
 drivers/gpu/drm/nouveau/nouveau_svm.c | 7 +++----
 include/linux/hmm.h                   | 1 -
 2 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index 49b520c60fc5..a74530b5a523 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -496,12 +496,12 @@ nouveau_range_fault(struct hmm_mirror *mirror, struct hmm_range *range)
 				 range->start, range->end,
 				 PAGE_SHIFT);
 	if (ret) {
-		up_read(&range->vma->vm_mm->mmap_sem);
+		up_read(&range->hmm->mm->mmap_sem);
 		return (int)ret;
 	}
 
 	if (!hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT)) {
-		up_read(&range->vma->vm_mm->mmap_sem);
+		up_read(&range->hmm->mm->mmap_sem);
 		return -EBUSY;
 	}
 
@@ -509,7 +509,7 @@ nouveau_range_fault(struct hmm_mirror *mirror, struct hmm_range *range)
 	if (ret <= 0) {
 		if (ret == 0)
 			ret = -EBUSY;
-		up_read(&range->vma->vm_mm->mmap_sem);
+		up_read(&range->hmm->mm->mmap_sem);
 		hmm_range_unregister(range);
 		return ret;
 	}
@@ -682,7 +682,6 @@ nouveau_svm_fault(struct nvif_notify *notify)
 			args.i.p.addr + args.i.p.size, fn - fi);
 
 	/* Have HMM fault pages within the fault window to the GPU. */
-	range.vma = vma;
 	range.start = args.i.p.addr;
 	range.end = args.i.p.addr + args.i.p.size;
 	range.pfns = args.phys;
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index f3693dcc8b98..68949cf815f9 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -164,7 +164,6 @@ enum hmm_pfn_value_e {
  */
 struct hmm_range {
 	struct hmm		*hmm;
-	struct vm_area_struct	*vma;
 	struct list_head	list;
 	unsigned long		start;
 	unsigned long		end