From patchwork Mon May 6 23:29:38 2019
From: Ralph Campbell <rcampbell@nvidia.com>
Subject: [PATCH 1/5] mm/hmm: Update HMM documentation
Date: Mon, 6 May 2019 16:29:38 -0700
Message-ID: <20190506232942.12623-2-rcampbell@nvidia.com>
In-Reply-To: <20190506232942.12623-1-rcampbell@nvidia.com>
References: <20190506232942.12623-1-rcampbell@nvidia.com>

Update the HMM documentation to reflect the latest API and make a few
minor wording changes.
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Cc: John Hubbard
Cc: Ira Weiny
Cc: Dan Williams
Cc: Arnd Bergmann
Cc: Balbir Singh
Cc: Dan Carpenter
Cc: Matthew Wilcox
Cc: Souptick Joarder
Cc: Andrew Morton
---
 Documentation/vm/hmm.rst | 139 ++++++++++++++++++++-------------------
 1 file changed, 73 insertions(+), 66 deletions(-)

diff --git a/Documentation/vm/hmm.rst b/Documentation/vm/hmm.rst
index ec1efa32af3c..7c1e929931a0 100644
--- a/Documentation/vm/hmm.rst
+++ b/Documentation/vm/hmm.rst
@@ -10,7 +10,7 @@
 of this being specialized struct page for such memory (see sections 5 to 7 of
 this document).

 HMM also provides optional helpers for SVM (Share Virtual Memory), i.e.,
-allowing a device to transparently access program address coherently with
+allowing a device to transparently access program addresses coherently with
 the CPU meaning that any valid pointer on the CPU is also a valid pointer
 for the device. This is becoming mandatory to simplify the use of advanced
 heterogeneous computing where GPU, DSP, or FPGA are used to perform various
@@ -22,8 +22,8 @@
 expose the hardware limitations that are inherent to many platforms. The third
 section gives an overview of the HMM design. The fourth section explains how
 CPU page-table mirroring works and the purpose of HMM in this context. The
 fifth section deals with how device memory is represented inside the kernel.
-Finally, the last section presents a new migration helper that allows lever-
-aging the device DMA engine.
+Finally, the last section presents a new migration helper that allows
+leveraging the device DMA engine.

 .. contents:: :local:
@@ -39,20 +39,20 @@
 address space. I use shared address space to refer to the opposite situation:
 i.e., one in which any application memory region can be used by a device
 transparently.

-Split address space happens because device can only access memory allocated
-through device specific API. This implies that all memory objects in a program
+Split address space happens because devices can only access memory allocated
+through a device specific API. This implies that all memory objects in a program
 are not equal from the device point of view which complicates large programs
 that rely on a wide set of libraries.

-Concretely this means that code that wants to leverage devices like GPUs needs
-to copy object between generically allocated memory (malloc, mmap private, mmap
+Concretely, this means that code that wants to leverage devices like GPUs needs
+to copy objects between generically allocated memory (malloc, mmap private, mmap
 share) and memory allocated through the device driver API (this still ends up
 with an mmap but of the device file).

 For flat data sets (array, grid, image, ...) this isn't too hard to achieve but
-complex data sets (list, tree, ...) are hard to get right. Duplicating a
+for complex data sets (list, tree, ...) it's hard to get right. Duplicating a
 complex data set needs to re-map all the pointer relations between each of its
-elements. This is error prone and program gets harder to debug because of the
+elements. This is error prone and programs get harder to debug because of the
 duplicate data set and addresses.

 Split address space also means that libraries cannot transparently use data
@@ -77,12 +77,12 @@
 I/O bus, device memory characteristics

 I/O buses cripple shared address spaces due to a few limitations. Most I/O
 buses only allow basic memory access from device to main memory; even cache
-coherency is often optional. Access to device memory from CPU is even more
+coherency is often optional. Access to device memory from a CPU is even more
 limited. More often than not, it is not cache coherent.

 If we only consider the PCIE bus, then a device can access main memory (often
 through an IOMMU) and be cache coherent with the CPUs. However, it only allows
-a limited set of atomic operations from device on main memory. This is worse
+a limited set of atomic operations from the device on main memory. This is worse
 in the other direction: the CPU can only access a limited range of the device
 memory and cannot perform atomic operations on it. Thus device memory cannot
 be considered the same as regular memory from the kernel point of view.
@@ -93,20 +93,20 @@
 The final limitation is latency. Access to main memory from the device has an
 order of magnitude higher latency than when the device accesses its own memory.

 Some platforms are developing new I/O buses or additions/modifications to PCIE
-to address some of these limitations (OpenCAPI, CCIX). They mainly allow two-
-way cache coherency between CPU and device and allow all atomic operations the
+to address some of these limitations (OpenCAPI, CCIX). They mainly allow
+two-way cache coherency between CPU and device and allow all atomic operations the
 architecture supports. Sadly, not all platforms are following this trend and
 some major architectures are left without hardware solutions to these problems.

 So for shared address space to make sense, not only must we allow devices to
 access any memory but we must also permit any memory to be migrated to device
-memory while device is using it (blocking CPU access while it happens).
+memory while the device is using it (blocking CPU access while it happens).

 Shared address space and migration
 ==================================

-HMM intends to provide two main features. First one is to share the address
+HMM intends to provide two main features. The first one is to share the address
 space by duplicating the CPU page table in the device page table so the same
 address points to the same physical memory for any valid main memory address
 in the process address space.
@@ -121,14 +121,14 @@
 why HMM provides helpers to factor out everything that can be while leaving the
 hardware specific details to the device driver.

 The second mechanism HMM provides is a new kind of ZONE_DEVICE memory that
-allows allocating a struct page for each page of the device memory. Those pages
+allows allocating a struct page for each page of device memory. Those pages
 are special because the CPU cannot map them. However, they allow migrating
 main memory to device memory using existing migration mechanisms and everything
-looks like a page is swapped out to disk from the CPU point of view. Using a
-struct page gives the easiest and cleanest integration with existing mm mech-
-anisms. Here again, HMM only provides helpers, first to hotplug new ZONE_DEVICE
+looks like a page that is swapped out to disk from the CPU point of view. Using a
+struct page gives the easiest and cleanest integration with existing mm
+mechanisms. Here again, HMM only provides helpers, first to hotplug new ZONE_DEVICE
 memory for the device memory and second to perform migration. Policy decisions
-of what and when to migrate things is left to the device driver.
+of what and when to migrate is left to the device driver.

 Note that any CPU access to a device page triggers a page fault and a migration
 back to main memory. For example, when a page backing a given CPU address A is
@@ -136,8 +136,8 @@
 migrated from a main memory page to a device page, then any CPU access to
 address A triggers a page fault and initiates a migration back to main memory.

 With these two features, HMM not only allows a device to mirror process address
-space and keeping both CPU and device page table synchronized, but also lever-
-ages device memory by migrating the part of the data set that is actively being
+space and keeps both CPU and device page tables synchronized, but also
+leverages device memory by migrating the part of the data set that is actively being
 used by the device.
@@ -151,21 +151,27 @@
 registration of an hmm_mirror struct::

  int hmm_mirror_register(struct hmm_mirror *mirror, struct mm_struct *mm);

- int hmm_mirror_register_locked(struct hmm_mirror *mirror,
-                                struct mm_struct *mm);
-
-The locked variant is to be used when the driver is already holding mmap_sem
-of the mm in write mode. The mirror struct has a set of callbacks that are used
+The mirror struct has a set of callbacks that are used
 to propagate CPU page tables::

  struct hmm_mirror_ops {
+     /* release() - release hmm_mirror
+      *
+      * @mirror: pointer to struct hmm_mirror
+      *
+      * This is called when the mm_struct is being released.
+      * The callback should make sure no references to the mirror occur
+      * after the callback returns.
+      */
+     void (*release)(struct hmm_mirror *mirror);
+
      /* sync_cpu_device_pagetables() - synchronize page tables
       *
       * @mirror: pointer to struct hmm_mirror
-      * @update_type: type of update that occurred to the CPU page table
-      * @start: virtual start address of the range to update
-      * @end: virtual end address of the range to update
+      * @update: update information (see struct mmu_notifier_range)
+      * Return: -EAGAIN if update.blockable false and callback need to
+      * block, 0 otherwise.
       *
       * This callback ultimately originates from mmu_notifiers when the CPU
       * page table is updated. The device driver must update its page table
@@ -176,14 +182,12 @@
       * page tables are completely updated (TLBs flushed, etc); this is a
       * synchronous call.
       */
-     void (*update)(struct hmm_mirror *mirror,
-                    enum hmm_update action,
-                    unsigned long start,
-                    unsigned long end);
+     int (*sync_cpu_device_pagetables)(struct hmm_mirror *mirror,
+                                       const struct hmm_update *update);
  };

 The device driver must perform the update action to the range (mark range
-read only, or fully unmap, ...). The device must be done with the update before
+read only, or fully unmap, etc.). The device must complete the update before
 the driver callback returns.
 When the device driver wants to populate a range of virtual addresses, it can
@@ -194,17 +198,18 @@
 use either::

 The first one (hmm_range_snapshot()) will only fetch present CPU page table
 entries and will not trigger a page fault on missing or non-present entries.
-The second one does trigger a page fault on missing or read-only entry if the
-write parameter is true. Page faults use the generic mm page fault code path
-just like a CPU page fault.
+The second one does trigger a page fault on missing or read-only entries if
+write access is requested (see below). Page faults use the generic mm page
+fault code path just like a CPU page fault.

 Both functions copy CPU page table entries into their pfns array argument. Each
 entry in that array corresponds to an address in the virtual range. HMM
 provides a set of flags to help the driver identify special CPU page table
 entries.

-Locking with the update() callback is the most important aspect the driver must
-respect in order to keep things properly synchronized. The usage pattern is::
+Locking within the sync_cpu_device_pagetables() callback is the most important
+aspect the driver must respect in order to keep things properly synchronized.
+The usage pattern is::

  int driver_populate_range(...)
  {
@@ -243,7 +248,7 @@
       return ret;
   }
   take_lock(driver->update);
-  if (!range.valid) {
+  if (!hmm_range_valid(&range)) {
       release_lock(driver->update);
       up_read(&mm->mmap_sem);
       goto again;
@@ -258,8 +263,8 @@
  }

 The driver->update lock is the same lock that the driver takes inside its
-update() callback. That lock must be held before checking the range.valid
-field to avoid any race with a concurrent CPU page table update.
+sync_cpu_device_pagetables() callback. That lock must be held before calling
+hmm_range_valid() to avoid any race with a concurrent CPU page table update.
 HMM implements all this on top of the mmu_notifier API because we wanted a
 simpler API and also to be able to perform optimizations latter on like doing
@@ -279,44 +284,46 @@
 concurrently).

 Leverage default_flags and pfn_flags_mask
 =========================================

-The hmm_range struct has 2 fields default_flags and pfn_flags_mask that allows
-to set fault or snapshot policy for a whole range instead of having to set them
-for each entries in the range.
+The hmm_range struct has 2 fields, default_flags and pfn_flags_mask, that specify
+fault or snapshot policy for the whole range instead of having to set them
+for each entry in the pfns array.
+
+For instance, if the device flags for range.flags are::

-For instance if the device flags for device entries are:
-    VALID (1 << 63)
-    WRITE (1 << 62)
+    range.flags[HMM_PFN_VALID] = (1 << 63);
+    range.flags[HMM_PFN_WRITE] = (1 << 62);

-Now let say that device driver wants to fault with at least read a range then
-it does set:
-    range->default_flags = (1 << 63)
+and the device driver wants pages for a range with at least read permission,
+it sets::
+
+    range->default_flags = (1 << 63);
     range->pfn_flags_mask = 0;

-and calls hmm_range_fault() as described above. This will fill fault all page
+and calls hmm_range_fault() as described above. This will fill fault all pages
 in the range with at least read permission.

-Now let say driver wants to do the same except for one page in the range for
-which its want to have write. Now driver set:
+Now let's say the driver wants to do the same except for one page in the range for
+which it wants to have write permission. Now driver set:
     range->default_flags = (1 << 63);
     range->pfn_flags_mask = (1 << 62);
     range->pfns[index_of_write] = (1 << 62);

-With this HMM will fault in all page with at least read (ie valid) and for the
+With this, HMM will fault in all pages with at least read (i.e., valid) and for the
 address == range->start + (index_of_write << PAGE_SHIFT) it will fault with
-write permission ie if the CPU pte does not have write permission set then HMM
+write permission i.e., if the CPU pte does not have write permission set then HMM
 will call handle_mm_fault().

-Note that HMM will populate the pfns array with write permission for any entry
-that have write permission within the CPU pte no matter what are the values set
+Note that HMM will populate the pfns array with write permission for any page
+that is mapped with CPU write permission no matter what values are set
 in default_flags or pfn_flags_mask.

 Represent and manage device memory from core kernel point of view
 =================================================================

-Several different designs were tried to support device memory. First one used
-a device specific data structure to keep information about migrated memory and
-HMM hooked itself in various places of mm code to handle any access to
+Several different designs were tried to support device memory. The first one
+used a device specific data structure to keep information about migrated memory
+and HMM hooked itself in various places of mm code to handle any access to
 addresses that were backed by device memory. It turns out that this ended up
 replicating most of the fields of struct page and also needed many kernel code
 paths to be updated to understand this new kind of memory.
@@ -339,7 +346,7 @@
 The hmm_devmem_ops is where most of the important things are::

  struct hmm_devmem_ops {
      void (*free)(struct hmm_devmem *devmem, struct page *page);
-     int (*fault)(struct hmm_devmem *devmem,
+     vm_fault_t (*fault)(struct hmm_devmem *devmem,
                   struct vm_area_struct *vma,
                   unsigned long addr,
                   struct page *page,
@@ -415,9 +422,9 @@
 willing to pay to keep all the code simpler.

 Memory cgroup (memcg) and rss accounting
 ========================================

-For now device memory is accounted as any regular page in rss counters (either
+For now, device memory is accounted as any regular page in rss counters (either
 anonymous if device page is used for anonymous, file if device page is used for
-file backed page or shmem if device page is used for shared memory). This is a
+file backed page, or shmem if device page is used for shared memory). This is a
 deliberate choice to keep existing applications, that might start using device
 memory without knowing about it, running unimpacted.
@@ -437,6 +444,6 @@
 get more experience in how device memory is used and its impact on memory
 resource control.

-Note that device memory can never be pinned by device driver nor through GUP
+Note that device memory can never be pinned by a device driver nor through GUP
 and thus such memory is always free upon process exit. Or when last reference
 is dropped in case of shared memory or file backed memory.
From patchwork Mon May 6 23:29:39 2019
From: Ralph Campbell <rcampbell@nvidia.com>
Subject: [PATCH 2/5] mm/hmm: Clean up some coding style and comments
Date: Mon, 6 May 2019 16:29:39 -0700
Message-ID: <20190506232942.12623-3-rcampbell@nvidia.com>
In-Reply-To: <20190506232942.12623-1-rcampbell@nvidia.com>
References: <20190506232942.12623-1-rcampbell@nvidia.com>

There are no functional changes, just some coding style clean ups and
minor comment changes.
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Cc: John Hubbard
Cc: Ira Weiny
Cc: Dan Williams
Cc: Arnd Bergmann
Cc: Balbir Singh
Cc: Dan Carpenter
Cc: Matthew Wilcox
Cc: Souptick Joarder
Cc: Andrew Morton
---
 include/linux/hmm.h | 71 +++++++++++++++++++++++----------------------
 mm/hmm.c            | 51 ++++++++++++++++----------------
 2 files changed, 62 insertions(+), 60 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 51ec27a84668..35a429621e1e 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -30,8 +30,8 @@
  *
  * HMM address space mirroring API:
  *
- * Use HMM address space mirroring if you want to mirror range of the CPU page
- * table of a process into a device page table. Here, "mirror" means "keep
+ * Use HMM address space mirroring if you want to mirror a range of the CPU
+ * page tables of a process into a device page table. Here, "mirror" means "keep
  * synchronized". Prerequisites: the device must provide the ability to write-
  * protect its page tables (at PAGE_SIZE granularity), and must be able to
  * recover from the resulting potential page faults.
@@ -114,10 +114,11 @@ struct hmm {
  * HMM_PFN_WRITE: CPU page table has write permission set
  * HMM_PFN_DEVICE_PRIVATE: private device memory (ZONE_DEVICE)
  *
- * The driver provide a flags array, if driver valid bit for an entry is bit
- * 3 ie (entry & (1 << 3)) is true if entry is valid then driver must provide
+ * The driver provides a flags array for mapping page protections to device
+ * PTE bits. If the driver valid bit for an entry is bit 3,
+ * i.e., (entry & (1 << 3)), then the driver must provide
  * an array in hmm_range.flags with hmm_range.flags[HMM_PFN_VALID] == 1 << 3.
- * Same logic apply to all flags. This is same idea as vm_page_prot in vma
+ * Same logic apply to all flags. This is the same idea as vm_page_prot in vma
  * except that this is per device driver rather than per architecture.
*/ enum hmm_pfn_flag_e { @@ -138,13 +139,13 @@ enum hmm_pfn_flag_e { * be mirrored by a device, because the entry will never have HMM_PFN_VALID * set and the pfn value is undefined. * - * Driver provide entry value for none entry, error entry and special entry, - * driver can alias (ie use same value for error and special for instance). It - * should not alias none and error or special. + * Driver provides values for none entry, error entry, and special entry. + * Driver can alias (i.e., use same value) error and special, but + * it should not alias none with error or special. * * HMM pfn value returned by hmm_vma_get_pfns() or hmm_vma_fault() will be: * hmm_range.values[HMM_PFN_ERROR] if CPU page table entry is poisonous, - * hmm_range.values[HMM_PFN_NONE] if there is no CPU page table + * hmm_range.values[HMM_PFN_NONE] if there is no CPU page table entry, * hmm_range.values[HMM_PFN_SPECIAL] if CPU page table entry is a special one */ enum hmm_pfn_value_e { @@ -167,6 +168,7 @@ enum hmm_pfn_value_e { * @values: pfn value for some special case (none, special, error, ...) * @default_flags: default flags for the range (write, read, ... 
see hmm doc) * @pfn_flags_mask: allows to mask pfn flags so that only default_flags matter + * @page_shift: device virtual address shift value (should be >= PAGE_SHIFT) * @pfn_shifts: pfn shift value (should be <= PAGE_SHIFT) * @valid: pfns array did not change since it has been fill by an HMM function */ @@ -189,7 +191,7 @@ struct hmm_range { /* * hmm_range_page_shift() - return the page shift for the range * @range: range being queried - * Returns: page shift (page size = 1 << page shift) for the range + * Return: page shift (page size = 1 << page shift) for the range */ static inline unsigned hmm_range_page_shift(const struct hmm_range *range) { @@ -199,7 +201,7 @@ static inline unsigned hmm_range_page_shift(const struct hmm_range *range) /* * hmm_range_page_size() - return the page size for the range * @range: range being queried - * Returns: page size for the range in bytes + * Return: page size for the range in bytes */ static inline unsigned long hmm_range_page_size(const struct hmm_range *range) { @@ -210,7 +212,7 @@ static inline unsigned long hmm_range_page_size(const struct hmm_range *range) * hmm_range_wait_until_valid() - wait for range to be valid * @range: range affected by invalidation to wait on * @timeout: time out for wait in ms (ie abort wait after that period of time) - * Returns: true if the range is valid, false otherwise. + * Return: true if the range is valid, false otherwise. */ static inline bool hmm_range_wait_until_valid(struct hmm_range *range, unsigned long timeout) @@ -231,7 +233,7 @@ static inline bool hmm_range_wait_until_valid(struct hmm_range *range, /* * hmm_range_valid() - test if a range is valid or not * @range: range - * Returns: true if the range is valid, false otherwise. + * Return: true if the range is valid, false otherwise. 
*/ static inline bool hmm_range_valid(struct hmm_range *range) { @@ -242,7 +244,7 @@ static inline bool hmm_range_valid(struct hmm_range *range) * hmm_device_entry_to_page() - return struct page pointed to by a device entry * @range: range use to decode device entry value * @entry: device entry value to get corresponding struct page from - * Returns: struct page pointer if entry is a valid, NULL otherwise + * Return: struct page pointer if entry is a valid, NULL otherwise * * If the device entry is valid (ie valid flag set) then return the struct page * matching the entry value. Otherwise return NULL. @@ -265,7 +267,7 @@ static inline struct page *hmm_device_entry_to_page(const struct hmm_range *rang * hmm_device_entry_to_pfn() - return pfn value store in a device entry * @range: range use to decode device entry value * @entry: device entry to extract pfn from - * Returns: pfn value if device entry is valid, -1UL otherwise + * Return: pfn value if device entry is valid, -1UL otherwise */ static inline unsigned long hmm_device_entry_to_pfn(const struct hmm_range *range, uint64_t pfn) @@ -285,7 +287,7 @@ hmm_device_entry_to_pfn(const struct hmm_range *range, uint64_t pfn) * hmm_device_entry_from_page() - create a valid device entry for a page * @range: range use to encode HMM pfn value * @page: page for which to create the device entry - * Returns: valid device entry for the page + * Return: valid device entry for the page */ static inline uint64_t hmm_device_entry_from_page(const struct hmm_range *range, struct page *page) @@ -298,7 +300,7 @@ static inline uint64_t hmm_device_entry_from_page(const struct hmm_range *range, * hmm_device_entry_from_pfn() - create a valid device entry value from pfn * @range: range use to encode HMM pfn value * @pfn: pfn value for which to create the device entry - * Returns: valid device entry for the pfn + * Return: valid device entry for the pfn */ static inline uint64_t hmm_device_entry_from_pfn(const struct hmm_range *range, 
unsigned long pfn) @@ -403,7 +405,7 @@ enum hmm_update_event { }; /* - * struct hmm_update - HMM update informations for callback + * struct hmm_update - HMM update information for callback * * @start: virtual start address of the range to update * @end: virtual end address of the range to update @@ -436,8 +438,8 @@ struct hmm_mirror_ops { /* sync_cpu_device_pagetables() - synchronize page tables * * @mirror: pointer to struct hmm_mirror - * @update: update informations (see struct hmm_update) - * Returns: -EAGAIN if update.blockable false and callback need to + * @update: update information (see struct hmm_update) + * Return: -EAGAIN if update.blockable false and callback need to * block, 0 otherwise. * * This callback ultimately originates from mmu_notifiers when the CPU @@ -476,13 +478,13 @@ void hmm_mirror_unregister(struct hmm_mirror *mirror); /* * hmm_mirror_mm_is_alive() - test if mm is still alive * @mirror: the HMM mm mirror for which we want to lock the mmap_sem - * Returns: false if the mm is dead, true otherwise + * Return: false if the mm is dead, true otherwise * - * This is an optimization it will not accurately always return -EINVAL if the - * mm is dead ie there can be false negative (process is being kill but HMM is - * not yet inform of that). It is only intented to be use to optimize out case - * where driver is about to do something time consuming and it would be better - * to skip it if the mm is dead. + * This is an optimization, it will not always accurately return false if the + * mm is dead; i.e., there can be false negatives (process is being killed but + * HMM is not yet informed of that). It is only intended to be used to optimize + * out cases where the driver is about to do something time consuming and it + * would be better to skip it if the mm is dead. 
*/ static inline bool hmm_mirror_mm_is_alive(struct hmm_mirror *mirror) { @@ -497,7 +499,6 @@ static inline bool hmm_mirror_mm_is_alive(struct hmm_mirror *mirror) return true; } - /* * Please see Documentation/vm/hmm.rst for how to use the range API. */ @@ -570,7 +571,7 @@ static inline int hmm_vma_fault(struct hmm_range *range, bool block) ret = hmm_range_fault(range, block); if (ret <= 0) { if (ret == -EBUSY || !ret) { - /* Same as above drop mmap_sem to match old API. */ + /* Same as above, drop mmap_sem to match old API. */ up_read(&range->vma->vm_mm->mmap_sem); ret = -EBUSY; } else if (ret == -EAGAIN) @@ -637,7 +638,7 @@ struct hmm_devmem_ops { * @page: pointer to struct page backing virtual address (unreliable) * @flags: FAULT_FLAG_* (see include/linux/mm.h) * @pmdp: page middle directory - * Returns: VM_FAULT_MINOR/MAJOR on success or one of VM_FAULT_ERROR + * Return: VM_FAULT_MINOR/MAJOR on success or one of VM_FAULT_ERROR * on error * * The callback occurs whenever there is a CPU page fault or GUP on a @@ -645,14 +646,14 @@ struct hmm_devmem_ops { * page back to regular memory (CPU accessible). * * The device driver is free to migrate more than one page from the - * fault() callback as an optimization. However if device decide to - * migrate more than one page it must always priotirize the faulting + * fault() callback as an optimization. However if the device decides + * to migrate more than one page it must always priotirize the faulting * address over the others. * - * The struct page pointer is only given as an hint to allow quick + * The struct page pointer is only given as a hint to allow quick * lookup of internal device driver data. A concurrent migration - * might have already free that page and the virtual address might - * not longer be back by it. So it should not be modified by the + * might have already freed that page and the virtual address might + * no longer be backed by it. So it should not be modified by the * callback. 
* * Note that mmap semaphore is held in read mode at least when this @@ -679,7 +680,7 @@ struct hmm_devmem_ops { * @ref: per CPU refcount * @page_fault: callback when CPU fault on an unaddressable device page * - * This an helper structure for device drivers that do not wish to implement + * This is a helper structure for device drivers that do not wish to implement * the gory details related to hotplugging new memoy and allocating struct * pages. * diff --git a/mm/hmm.c b/mm/hmm.c index 0db8491090b8..f6c4c8633db9 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -162,9 +162,8 @@ static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm) /* Wake-up everyone waiting on any range. */ mutex_lock(&hmm->lock); - list_for_each_entry(range, &hmm->ranges, list) { + list_for_each_entry(range, &hmm->ranges, list) range->valid = false; - } wake_up_all(&hmm->wq); mutex_unlock(&hmm->lock); @@ -175,9 +174,10 @@ static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm) list_del_init(&mirror->list); if (mirror->ops->release) { /* - * Drop mirrors_sem so callback can wait on any pending - * work that might itself trigger mmu_notifier callback - * and thus would deadlock with us. + * Drop mirrors_sem so the release callback can wait + * on any pending work that might itself trigger a + * mmu_notifier callback and thus would deadlock with + * us. 
*/ up_write(&hmm->mirrors_sem); mirror->ops->release(mirror); @@ -232,11 +232,8 @@ static int hmm_invalidate_range_start(struct mmu_notifier *mn, int ret; ret = mirror->ops->sync_cpu_device_pagetables(mirror, &update); - if (!update.blockable && ret == -EAGAIN) { - up_read(&hmm->mirrors_sem); - ret = -EAGAIN; - goto out; - } + if (!update.blockable && ret == -EAGAIN) + break; } up_read(&hmm->mirrors_sem); @@ -280,6 +277,7 @@ static const struct mmu_notifier_ops hmm_mmu_notifier_ops = { * * @mirror: new mirror struct to register * @mm: mm to register against + * Return: 0 on success, -ENOMEM if no memory, -EINVAL if invalid arguments * * To start mirroring a process address space, the device driver must register * an HMM mirror struct. @@ -307,7 +305,7 @@ EXPORT_SYMBOL(hmm_mirror_register); /* * hmm_mirror_unregister() - unregister a mirror * - * @mirror: new mirror struct to register + * @mirror: mirror struct to unregister * * Stop mirroring a process address space, and cleanup. */ @@ -381,7 +379,7 @@ static int hmm_pfns_bad(unsigned long addr, * @fault: should we fault or not ? * @write_fault: write fault ? * @walk: mm_walk structure - * Returns: 0 on success, -EBUSY after page fault, or page fault error + * Return: 0 on success, -EBUSY after page fault, or page fault error * * This function will be called whenever pmd_none() or pte_none() returns true, * or whenever there is no page directory covering the virtual address range. @@ -924,6 +922,7 @@ int hmm_range_register(struct hmm_range *range, unsigned page_shift) { unsigned long mask = ((1UL << page_shift) - 1UL); + struct hmm *hmm; range->valid = false; range->hmm = NULL; @@ -947,18 +946,18 @@ int hmm_range_register(struct hmm_range *range, return -EFAULT; } - /* Initialize range to track CPU page table update */ + /* Initialize range to track CPU page table updates. 
*/ mutex_lock(&range->hmm->lock); - list_add_rcu(&range->list, &range->hmm->ranges); + list_add_rcu(&range->list, &hmm->ranges); /* * If there are any concurrent notifiers we have to wait for them for * the range to be valid (see hmm_range_wait_until_valid()). */ - if (!range->hmm->notifiers) + if (!hmm->notifiers) range->valid = true; - mutex_unlock(&range->hmm->lock); + mutex_unlock(&hmm->lock); return 0; } @@ -973,17 +972,19 @@ EXPORT_SYMBOL(hmm_range_register); */ void hmm_range_unregister(struct hmm_range *range) { + struct hmm *hmm = range->hmm; + /* Sanity check this really should not happen. */ - if (range->hmm == NULL || range->end <= range->start) + if (hmm == NULL || range->end <= range->start) return; - mutex_lock(&range->hmm->lock); + mutex_lock(&hmm->lock); list_del_rcu(&range->list); - mutex_unlock(&range->hmm->lock); + mutex_unlock(&hmm->lock); /* Drop reference taken by hmm_range_register() */ range->valid = false; - hmm_put(range->hmm); + hmm_put(hmm); range->hmm = NULL; } EXPORT_SYMBOL(hmm_range_unregister); @@ -991,7 +992,7 @@ EXPORT_SYMBOL(hmm_range_unregister); /* * hmm_range_snapshot() - snapshot CPU page table for a range * @range: range - * Returns: -EINVAL if invalid argument, -ENOMEM out of memory, -EPERM invalid + * Return: -EINVAL if invalid argument, -ENOMEM out of memory, -EPERM invalid * permission (for instance asking for write and range is read only), * -EAGAIN if you need to retry, -EFAULT invalid (ie either no valid * vma or it is illegal to access that range), number of valid pages @@ -1075,7 +1076,7 @@ EXPORT_SYMBOL(hmm_range_snapshot); * hmm_range_fault() - try to fault some address in a virtual address range * @range: range being faulted * @block: allow blocking on fault (if true it sleeps and do not drop mmap_sem) - * Returns: number of valid pages in range->pfns[] (from range start + * Return: number of valid pages in range->pfns[] (from range start * address). This may be zero. 
If the return value is negative, * then one of the following values may be returned: * @@ -1193,7 +1194,7 @@ EXPORT_SYMBOL(hmm_range_fault); * @device: device against to dma map page to * @daddrs: dma address of mapped pages * @block: allow blocking on fault (if true it sleeps and do not drop mmap_sem) - * Returns: number of pages mapped on success, -EAGAIN if mmap_sem have been + * Return: number of pages mapped on success, -EAGAIN if mmap_sem have been * drop and you need to try again, some other error value otherwise * * Note same usage pattern as hmm_range_fault(). @@ -1281,7 +1282,7 @@ EXPORT_SYMBOL(hmm_range_dma_map); * @device: device against which dma map was done * @daddrs: dma address of mapped pages * @dirty: dirty page if it had the write flag set - * Returns: number of page unmapped on success, -EINVAL otherwise + * Return: number of page unmapped on success, -EINVAL otherwise * * Note that caller MUST abide by mmu notifier or use HMM mirror and abide * to the sync_cpu_device_pagetables() callback so that it is safe here to @@ -1404,7 +1405,7 @@ static void hmm_devmem_free(struct page *page, void *data) * @ops: memory event device driver callback (see struct hmm_devmem_ops) * @device: device struct to bind the resource too * @size: size in bytes of the device memory to add - * Returns: pointer to new hmm_devmem struct ERR_PTR otherwise + * Return: pointer to new hmm_devmem struct ERR_PTR otherwise * * This function first finds an empty range of physical address big enough to * contain the new resource, and then hotplugs it as ZONE_DEVICE memory, which

From patchwork Mon May 6 23:29:40 2019
X-Patchwork-Submitter: Ralph Campbell
X-Patchwork-Id: 10932019
Subject: [PATCH 3/5] mm/hmm: Use mm_get_hmm() in hmm_range_register()
Date: Mon, 6 May 2019 16:29:40 -0700
Message-ID: <20190506232942.12623-4-rcampbell@nvidia.com>
In-Reply-To: <20190506232942.12623-1-rcampbell@nvidia.com>
References: <20190506232942.12623-1-rcampbell@nvidia.com>
From: Ralph Campbell

In hmm_range_register(), the call to hmm_get_or_create() implies that hmm_range_register() could be called before hmm_mirror_register() when in fact, that would violate the HMM API. Use mm_get_hmm() instead of hmm_get_or_create() to get the HMM structure.
Signed-off-by: Ralph Campbell Cc: John Hubbard Cc: Ira Weiny Cc: Dan Williams Cc: Arnd Bergmann Cc: Balbir Singh Cc: Dan Carpenter Cc: Matthew Wilcox Cc: Souptick Joarder Cc: Andrew Morton --- mm/hmm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/hmm.c b/mm/hmm.c index f6c4c8633db9..2aa75dbed04a 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -936,7 +936,7 @@ int hmm_range_register(struct hmm_range *range, range->start = start; range->end = end; - range->hmm = hmm_get_or_create(mm); + range->hmm = mm_get_hmm(mm); if (!range->hmm) return -EFAULT;

From patchwork Mon May 6 23:29:41 2019
X-Patchwork-Submitter: Ralph Campbell
X-Patchwork-Id: 10932021
Subject: [PATCH 4/5] mm/hmm: hmm_vma_fault() doesn't always call hmm_range_unregister()
Date: Mon, 6 May 2019 16:29:41 -0700
Message-ID: <20190506232942.12623-5-rcampbell@nvidia.com>
In-Reply-To: <20190506232942.12623-1-rcampbell@nvidia.com>
References: <20190506232942.12623-1-rcampbell@nvidia.com>
From: Ralph Campbell

The helper function hmm_vma_fault() calls hmm_range_register() but is missing a call to hmm_range_unregister() in one of the error paths. This leads to a reference count leak and ultimately a memory leak on struct hmm. Always call hmm_range_unregister() if hmm_range_register() succeeded.
Signed-off-by: Ralph Campbell Cc: John Hubbard Cc: Ira Weiny Cc: Dan Williams Cc: Arnd Bergmann Cc: Balbir Singh Cc: Dan Carpenter Cc: Matthew Wilcox Cc: Souptick Joarder Cc: Andrew Morton --- include/linux/hmm.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/include/linux/hmm.h b/include/linux/hmm.h index 35a429621e1e..fa0671d67269 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -559,6 +559,7 @@ static inline int hmm_vma_fault(struct hmm_range *range, bool block) return (int)ret; if (!hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT)) { + hmm_range_unregister(range); /* * The mmap_sem was taken by driver we release it here and * returns -EAGAIN which correspond to mmap_sem have been @@ -570,13 +571,13 @@ static inline int hmm_vma_fault(struct hmm_range *range, bool block) ret = hmm_range_fault(range, block); if (ret <= 0) { + hmm_range_unregister(range); if (ret == -EBUSY || !ret) { /* Same as above, drop mmap_sem to match old API. */ up_read(&range->vma->vm_mm->mmap_sem); ret = -EBUSY; } else if (ret == -EAGAIN) ret = -EBUSY; - hmm_range_unregister(range); return ret; } return 0; From patchwork Mon May 6 23:35:14 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ralph Campbell X-Patchwork-Id: 10932023 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 408E9933 for ; Mon, 6 May 2019 23:35:30 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 241C5288BF for ; Mon, 6 May 2019 23:35:30 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 11A0C288EA; Mon, 6 May 2019 23:35:30 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-3.0 
Subject: [PATCH 5/5] mm/hmm: Fix mm stale reference use in hmm_free()
Date: Mon, 6 May 2019 16:35:14 -0700
Message-ID: <20190506233514.12795-1-rcampbell@nvidia.com>
X-Mailer: git-send-email 2.20.1
From: Ralph Campbell

The last reference to struct hmm may be released long after the mm_struct
is destroyed because the struct hmm_mirror memory may be part of a device
driver's open file private data pointer. The file descriptor close usually
happens after the mm_struct is destroyed in do_exit(). This is a good
reason for making struct hmm a kref_t object [1], since its lifetime spans
the lifetime of mm_struct and struct hmm_mirror.

The fix is to not use hmm->mm in hmm_free() and to clear the mm->hmm and
hmm->mm pointers in hmm_mm_destroy() when the mm_struct is destroyed.
Clearing the pointers at the very last moment eliminates the need for
additional locking, since the mmu notifier code already handles quiescing
notifier callbacks and unregistering the hmm notifiers. Also, by making
mm_struct hold a reference to struct hmm, there is no need to check for a
zero hmm reference count in mm_get_hmm().
[1] https://marc.info/?l=linux-mm&m=155432001406049&w=2
    ("mm/hmm: use reference counting for HMM struct v3")

Signed-off-by: Ralph Campbell
Cc: John Hubbard
Cc: Ira Weiny
Cc: Dan Williams
Cc: Arnd Bergmann
Cc: Balbir Singh
Cc: Dan Carpenter
Cc: Matthew Wilcox
Cc: Souptick Joarder
Cc: Andrew Morton
---
 include/linux/hmm.h |  10 +----
 mm/hmm.c            | 100 ++++++++++++++++----------------------------
 2 files changed, 37 insertions(+), 73 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index fa0671d67269..538867c76906 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -488,15 +488,7 @@ void hmm_mirror_unregister(struct hmm_mirror *mirror);
  */
 static inline bool hmm_mirror_mm_is_alive(struct hmm_mirror *mirror)
 {
-	struct mm_struct *mm;
-
-	if (!mirror || !mirror->hmm)
-		return false;
-	mm = READ_ONCE(mirror->hmm->mm);
-	if (mirror->hmm->dead || !mm)
-		return false;
-
-	return true;
+	return mirror && mirror->hmm && !mirror->hmm->dead;
 }
 
 /*
diff --git a/mm/hmm.c b/mm/hmm.c
index 2aa75dbed04a..4e42c282d334 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -43,8 +43,10 @@ static inline struct hmm *mm_get_hmm(struct mm_struct *mm)
 {
 	struct hmm *hmm = READ_ONCE(mm->hmm);
 
-	if (hmm && kref_get_unless_zero(&hmm->kref))
+	if (hmm && !hmm->dead) {
+		kref_get(&hmm->kref);
 		return hmm;
+	}
 
 	return NULL;
 }
@@ -53,25 +55,28 @@ static inline struct hmm *mm_get_hmm(struct mm_struct *mm)
  * hmm_get_or_create - register HMM against an mm (HMM internal)
  *
  * @mm: mm struct to attach to
- * Returns: returns an HMM object, either by referencing the existing
- * (per-process) object, or by creating a new one.
+ * Return: an HMM object reference, either by referencing the existing
+ * (per-process) object, or by creating a new one.
  *
- * This is not intended to be used directly by device drivers. If mm already
- * has an HMM struct then it get a reference on it and returns it. Otherwise
- * it allocates an HMM struct, initializes it, associate it with the mm and
- * returns it.
+ * If the mm already has an HMM struct then return a new reference to it.
+ * Otherwise, allocate an HMM struct, initialize it, associate it with the mm,
+ * and return a new reference to it. If the return value is not NULL,
+ * the caller is responsible for calling hmm_put().
  */
 static struct hmm *hmm_get_or_create(struct mm_struct *mm)
 {
-	struct hmm *hmm = mm_get_hmm(mm);
-	bool cleanup = false;
+	struct hmm *hmm = mm->hmm;
 
-	if (hmm)
-		return hmm;
+	if (hmm) {
+		if (hmm->dead)
+			goto error;
+		goto out;
+	}
 
 	hmm = kmalloc(sizeof(*hmm), GFP_KERNEL);
 	if (!hmm)
-		return NULL;
+		goto error;
+
 	init_waitqueue_head(&hmm->wq);
 	INIT_LIST_HEAD(&hmm->mirrors);
 	init_rwsem(&hmm->mirrors_sem);
@@ -83,47 +88,32 @@ static struct hmm *hmm_get_or_create(struct mm_struct *mm)
 	hmm->dead = false;
 	hmm->mm = mm;
 
-	spin_lock(&mm->page_table_lock);
-	if (!mm->hmm)
-		mm->hmm = hmm;
-	else
-		cleanup = true;
-	spin_unlock(&mm->page_table_lock);
-
-	if (cleanup)
-		goto error;
-
 	/*
-	 * We should only get here if hold the mmap_sem in write mode ie on
-	 * registration of first mirror through hmm_mirror_register()
+	 * The mmap_sem should be held for write so no additional locking
+	 * is needed. Note that struct_mm holds a reference to hmm.
+	 * It is cleared in hmm_release().
 	 */
+	mm->hmm = hmm;
+
 	hmm->mmu_notifier.ops = &hmm_mmu_notifier_ops;
 	if (__mmu_notifier_register(&hmm->mmu_notifier, mm))
 		goto error_mm;
 
+out:
+	/* Return a separate hmm reference for the caller. */
+	kref_get(&hmm->kref);
 	return hmm;
 
 error_mm:
-	spin_lock(&mm->page_table_lock);
-	if (mm->hmm == hmm)
-		mm->hmm = NULL;
-	spin_unlock(&mm->page_table_lock);
-error:
+	mm->hmm = NULL;
 	kfree(hmm);
+error:
 	return NULL;
 }
 
 static void hmm_free(struct kref *kref)
 {
 	struct hmm *hmm = container_of(kref, struct hmm, kref);
-	struct mm_struct *mm = hmm->mm;
-
-	mmu_notifier_unregister_no_release(&hmm->mmu_notifier, mm);
-
-	spin_lock(&mm->page_table_lock);
-	if (mm->hmm == hmm)
-		mm->hmm = NULL;
-	spin_unlock(&mm->page_table_lock);
 
 	kfree(hmm);
 }
@@ -135,25 +125,18 @@ static inline void hmm_put(struct hmm *hmm)
 
 void hmm_mm_destroy(struct mm_struct *mm)
 {
-	struct hmm *hmm;
+	struct hmm *hmm = mm->hmm;
 
-	spin_lock(&mm->page_table_lock);
-	hmm = mm_get_hmm(mm);
-	mm->hmm = NULL;
 	if (hmm) {
+		mm->hmm = NULL;
 		hmm->mm = NULL;
-		hmm->dead = true;
-		spin_unlock(&mm->page_table_lock);
 		hmm_put(hmm);
-		return;
 	}
-
-	spin_unlock(&mm->page_table_lock);
 }
 
 static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 {
-	struct hmm *hmm = mm_get_hmm(mm);
+	struct hmm *hmm = mm->hmm;
 	struct hmm_mirror *mirror;
 	struct hmm_range *range;
 
@@ -187,14 +170,12 @@ static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 				  struct hmm_mirror, list);
 	}
 	up_write(&hmm->mirrors_sem);
-
-	hmm_put(hmm);
 }
 
 static int hmm_invalidate_range_start(struct mmu_notifier *mn,
 			const struct mmu_notifier_range *nrange)
 {
-	struct hmm *hmm = mm_get_hmm(nrange->mm);
+	struct hmm *hmm = nrange->mm->hmm;
 	struct hmm_mirror *mirror;
 	struct hmm_update update;
 	struct hmm_range *range;
@@ -238,14 +219,13 @@ static int hmm_invalidate_range_start(struct mmu_notifier *mn,
 	up_read(&hmm->mirrors_sem);
 
 out:
-	hmm_put(hmm);
 	return ret;
 }
 
 static void hmm_invalidate_range_end(struct mmu_notifier *mn,
 			const struct mmu_notifier_range *nrange)
 {
-	struct hmm *hmm = mm_get_hmm(nrange->mm);
+	struct hmm *hmm = nrange->mm->hmm;
 
 	VM_BUG_ON(!hmm);
 
@@ -262,8 +242,6 @@ static void hmm_invalidate_range_end(struct mmu_notifier *mn,
 		wake_up_all(&hmm->wq);
 	}
 	mutex_unlock(&hmm->lock);
-
-	hmm_put(hmm);
 }
 
 static const struct mmu_notifier_ops hmm_mmu_notifier_ops = {
@@ -931,20 +909,14 @@ int hmm_range_register(struct hmm_range *range,
 		return -EINVAL;
 	if (start >= end)
 		return -EINVAL;
 
+	hmm = mm_get_hmm(mm);
+	if (!hmm)
+		return -EFAULT;
+
 	range->page_shift = page_shift;
 	range->start = start;
 	range->end = end;
-
-	range->hmm = mm_get_hmm(mm);
-	if (!range->hmm)
-		return -EFAULT;
-
-	/* Check if hmm_mm_destroy() was call. */
-	if (range->hmm->mm == NULL || range->hmm->dead) {
-		hmm_put(range->hmm);
-		return -EFAULT;
-	}
+	range->hmm = hmm;
 
 	/* Initialize range to track CPU page table updates. */
 	mutex_lock(&range->hmm->lock);