From patchwork Mon Feb 11 22:44:33 2019
X-Patchwork-Submitter: Daniel Jordan
X-Patchwork-Id: 10806911
From: Daniel Jordan <daniel.m.jordan@oracle.com>
To: jgg@ziepe.ca
Cc: akpm@linux-foundation.org, dave@stgolabs.net, jack@suse.cz, cl@linux.com,
    linux-mm@kvack.org, kvm@vger.kernel.org, kvm-ppc@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-fpga@vger.kernel.org,
    linux-kernel@vger.kernel.org, alex.williamson@redhat.com, paulus@ozlabs.org,
    benh@kernel.crashing.org, mpe@ellerman.id.au, hao.wu@intel.com,
    atull@kernel.org, mdf@kernel.org, aik@ozlabs.ru, daniel.m.jordan@oracle.com
Subject: [PATCH 1/5] vfio/type1: use pinned_vm instead of locked_vm to account pinned pages
Date: Mon, 11 Feb 2019 17:44:33 -0500
Message-Id: <20190211224437.25267-2-daniel.m.jordan@oracle.com>
In-Reply-To: <20190211224437.25267-1-daniel.m.jordan@oracle.com>
References: <20190211224437.25267-1-daniel.m.jordan@oracle.com>
X-Mailer: git-send-email 2.20.1

Beginning with bc3e53f682d9 ("mm: distinguish between mlocked and pinned
pages"), locked and pinned pages are accounted separately.  Type1 accounts
pinned pages to locked_vm; use pinned_vm instead.

pinned_vm recently became an atomic64_t and so no longer relies on
mmap_sem being held for writing; delete that locking.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
---
 drivers/vfio/vfio_iommu_type1.c | 31 ++++++++++++-------------------
 1 file changed, 12 insertions(+), 19 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 73652e21efec..a56cc341813f 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -257,7 +257,8 @@ static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
 static int vfio_lock_acct(struct vfio_dma *dma, long npage, bool async)
 {
         struct mm_struct *mm;
-        int ret;
+        s64 pinned_vm;
+        int ret = 0;
 
         if (!npage)
                 return 0;
@@ -266,24 +267,15 @@ static int vfio_lock_acct(struct vfio_dma *dma, long npage, bool async)
         if (!mm)
                 return -ESRCH; /* process exited */
 
-        ret = down_write_killable(&mm->mmap_sem);
-        if (!ret) {
-                if (npage > 0) {
-                        if (!dma->lock_cap) {
-                                unsigned long limit;
-
-                                limit = task_rlimit(dma->task,
-                                                RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+        pinned_vm = atomic64_add_return(npage, &mm->pinned_vm);
 
-                                if (mm->locked_vm + npage > limit)
-                                        ret = -ENOMEM;
-                        }
+        if (npage > 0 && !dma->lock_cap) {
+                unsigned long limit = task_rlimit(dma->task, RLIMIT_MEMLOCK) >>
+                                                                PAGE_SHIFT;
+                if (pinned_vm > limit) {
+                        atomic64_sub(npage, &mm->pinned_vm);
+                        ret = -ENOMEM;
                 }
-
-                if (!ret)
-                        mm->locked_vm += npage;
-
-                up_write(&mm->mmap_sem);
         }
 
         if (async)
@@ -401,6 +393,7 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
         long ret, pinned = 0, lock_acct = 0;
         bool rsvd;
         dma_addr_t iova = vaddr - dma->vaddr + dma->iova;
+        atomic64_t *pinned_vm = &current->mm->pinned_vm;
 
         /* This code path is only user initiated */
         if (!current->mm)
@@ -418,7 +411,7 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
          * pages are already counted against the user.
          */
         if (!rsvd && !vfio_find_vpfn(dma, iova)) {
-                if (!dma->lock_cap && current->mm->locked_vm + 1 > limit) {
+                if (!dma->lock_cap && atomic64_read(pinned_vm) + 1 > limit) {
                         put_pfn(*pfn_base, dma->prot);
                         pr_warn("%s: RLIMIT_MEMLOCK (%ld) exceeded\n", __func__,
                                 limit << PAGE_SHIFT);
@@ -445,7 +438,7 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
 
                 if (!rsvd && !vfio_find_vpfn(dma, iova)) {
                         if (!dma->lock_cap &&
-                            current->mm->locked_vm + lock_acct + 1 > limit) {
+                            atomic64_read(pinned_vm) + lock_acct + 1 > limit) {
                                 put_pfn(pfn, dma->prot);
                                 pr_warn("%s: RLIMIT_MEMLOCK (%ld) exceeded\n",
                                         __func__, limit << PAGE_SHIFT);
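Every patch in this series applies the same accounting pattern that vfio_lock_acct() adopts above: charge mm->pinned_vm speculatively with atomic64_add_return(), compare the returned total against the RLIMIT_MEMLOCK ceiling, and back the charge out if it does not fit. A condensed, hypothetical sketch of that pattern, not the patched function itself (the helper name and the lock_cap argument standing in for a CAP_IPC_LOCK check are invented for illustration):

/* Illustrative helper only -- distills the charge/rollback pattern. */
static int pinned_vm_try_charge(struct mm_struct *mm, long npages,
                                unsigned long limit_pages, bool lock_cap)
{
        /* Optimistic charge; pinned_vm is now a plain atomic64_t counter. */
        s64 pinned_vm = atomic64_add_return(npages, &mm->pinned_vm);

        /* Privileged callers (CAP_IPC_LOCK) bypass the rlimit check. */
        if (!lock_cap && pinned_vm > limit_pages) {
                atomic64_sub(npages, &mm->pinned_vm);   /* roll back */
                return -ENOMEM;
        }
        return 0;
}

Because the counter is bumped before the limit test, concurrent callers can transiently push pinned_vm past the limit before one of them backs out; the series trades that small overshoot window for not taking mmap_sem at all.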
From patchwork Mon Feb 11 22:44:34 2019
X-Patchwork-Submitter: Daniel Jordan
X-Patchwork-Id: 10806885
From: Daniel Jordan <daniel.m.jordan@oracle.com>
To: jgg@ziepe.ca
Cc: akpm@linux-foundation.org, dave@stgolabs.net, jack@suse.cz, cl@linux.com,
    linux-mm@kvack.org, kvm@vger.kernel.org, kvm-ppc@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-fpga@vger.kernel.org,
    linux-kernel@vger.kernel.org, alex.williamson@redhat.com, paulus@ozlabs.org,
    benh@kernel.crashing.org, mpe@ellerman.id.au, hao.wu@intel.com,
    atull@kernel.org, mdf@kernel.org, aik@ozlabs.ru, daniel.m.jordan@oracle.com
Subject: [PATCH 2/5] vfio/spapr_tce: use pinned_vm instead of locked_vm to account pinned pages
Date: Mon, 11 Feb 2019 17:44:34 -0500
Message-Id: <20190211224437.25267-3-daniel.m.jordan@oracle.com>
In-Reply-To: <20190211224437.25267-1-daniel.m.jordan@oracle.com>
References: <20190211224437.25267-1-daniel.m.jordan@oracle.com>
X-Mailer: git-send-email 2.20.1

Beginning with bc3e53f682d9 ("mm: distinguish between mlocked and pinned
pages"), locked and pinned pages are accounted separately.  The SPAPR
TCE VFIO IOMMU driver accounts pinned pages to locked_vm; use pinned_vm
instead.

pinned_vm recently became an atomic64_t and so no longer relies on
mmap_sem being held for writing; delete that locking.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
---
 Documentation/vfio.txt              |  6 +--
 drivers/vfio/vfio_iommu_spapr_tce.c | 64 ++++++++++++++---------------
 2 files changed, 33 insertions(+), 37 deletions(-)

diff --git a/Documentation/vfio.txt b/Documentation/vfio.txt
index f1a4d3c3ba0b..fa37d65363f9 100644
--- a/Documentation/vfio.txt
+++ b/Documentation/vfio.txt
@@ -308,7 +308,7 @@ This implementation has some specifics:
    currently there is no way to reduce the number of calls. In order to make
    things faster, the map/unmap handling has been implemented in real mode
    which provides an excellent performance which has limitations such as
-   inability to do locked pages accounting in real time.
+   inability to do pinned pages accounting in real time.
 
 4) According to sPAPR specification, A Partitionable Endpoint (PE) is an I/O
    subtree that can be treated as a unit for the purposes of partitioning and
@@ -324,7 +324,7 @@ This implementation has some specifics:
    returns the size and the start of the DMA window on the PCI bus.
 
    VFIO_IOMMU_ENABLE
-   enables the container. The locked pages accounting
+   enables the container. The pinned pages accounting
    is done at this point. This lets user first to know what
    the DMA window is and adjust rlimit before doing any real job.
@@ -454,7 +454,7 @@ This implementation has some specifics:
 
    PPC64 paravirtualized guests generate a lot of map/unmap requests,
    and the handling of those includes pinning/unpinning pages and updating
-   mm::locked_vm counter to make sure we do not exceed the rlimit.
+   mm::pinned_vm counter to make sure we do not exceed the rlimit.
    The v2 IOMMU splits accounting and pinning into separate operations:
 
    - VFIO_IOMMU_SPAPR_REGISTER_MEMORY/VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY ioctls

diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
index c424913324e3..f47e020dc5e4 100644
--- a/drivers/vfio/vfio_iommu_spapr_tce.c
+++ b/drivers/vfio/vfio_iommu_spapr_tce.c
@@ -34,9 +34,11 @@ static void tce_iommu_detach_group(void *iommu_data,
                 struct iommu_group *iommu_group);
 
-static long try_increment_locked_vm(struct mm_struct *mm, long npages)
+static long try_increment_pinned_vm(struct mm_struct *mm, long npages)
 {
-        long ret = 0, locked, lock_limit;
+        long ret = 0;
+        s64 pinned;
+        unsigned long lock_limit;
 
         if (WARN_ON_ONCE(!mm))
                 return -EPERM;
@@ -44,39 +46,33 @@ static long try_increment_locked_vm(struct mm_struct *mm, long npages)
         if (!npages)
                 return 0;
 
-        down_write(&mm->mmap_sem);
-        locked = mm->locked_vm + npages;
+        pinned = atomic64_add_return(npages, &mm->pinned_vm);
         lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
-        if (locked > lock_limit && !capable(CAP_IPC_LOCK))
+        if (pinned > lock_limit && !capable(CAP_IPC_LOCK)) {
                 ret = -ENOMEM;
-        else
-                mm->locked_vm += npages;
+                atomic64_sub(npages, &mm->pinned_vm);
+        }
 
-        pr_debug("[%d] RLIMIT_MEMLOCK +%ld %ld/%ld%s\n", current->pid,
+        pr_debug("[%d] RLIMIT_MEMLOCK +%ld %ld/%lu%s\n", current->pid,
                         npages << PAGE_SHIFT,
-                        mm->locked_vm << PAGE_SHIFT,
-                        rlimit(RLIMIT_MEMLOCK),
-                        ret ? " - exceeded" : "");
-
-        up_write(&mm->mmap_sem);
+                        atomic64_read(&mm->pinned_vm) << PAGE_SHIFT,
+                        rlimit(RLIMIT_MEMLOCK), ret ? " - exceeded" : "");
 
         return ret;
 }
 
-static void decrement_locked_vm(struct mm_struct *mm, long npages)
+static void decrement_pinned_vm(struct mm_struct *mm, long npages)
 {
         if (!mm || !npages)
                 return;
 
-        down_write(&mm->mmap_sem);
-        if (WARN_ON_ONCE(npages > mm->locked_vm))
-                npages = mm->locked_vm;
-        mm->locked_vm -= npages;
-        pr_debug("[%d] RLIMIT_MEMLOCK -%ld %ld/%ld\n", current->pid,
+        if (WARN_ON_ONCE(npages > atomic64_read(&mm->pinned_vm)))
+                npages = atomic64_read(&mm->pinned_vm);
+        atomic64_sub(npages, &mm->pinned_vm);
+        pr_debug("[%d] RLIMIT_MEMLOCK -%ld %ld/%lu\n", current->pid,
                         npages << PAGE_SHIFT,
-                        mm->locked_vm << PAGE_SHIFT,
+                        atomic64_read(&mm->pinned_vm) << PAGE_SHIFT,
                         rlimit(RLIMIT_MEMLOCK));
-        up_write(&mm->mmap_sem);
 }
 
 /*
@@ -110,7 +106,7 @@ struct tce_container {
         bool enabled;
         bool v2;
         bool def_window_pending;
-        unsigned long locked_pages;
+        unsigned long pinned_pages;
         struct mm_struct *mm;
         struct iommu_table *tables[IOMMU_TABLE_GROUP_MAX_TABLES];
         struct list_head group_list;
@@ -283,7 +279,7 @@ static int tce_iommu_find_free_table(struct tce_container *container)
 static int tce_iommu_enable(struct tce_container *container)
 {
         int ret = 0;
-        unsigned long locked;
+        unsigned long pinned;
         struct iommu_table_group *table_group;
         struct tce_iommu_group *tcegrp;
 
@@ -292,15 +288,15 @@ static int tce_iommu_enable(struct tce_container *container)
 
         /*
          * When userspace pages are mapped into the IOMMU, they are effectively
-         * locked memory, so, theoretically, we need to update the accounting
-         * of locked pages on each map and unmap. For powerpc, the map unmap
+         * pinned memory, so, theoretically, we need to update the accounting
+         * of pinned pages on each map and unmap. For powerpc, the map unmap
          * paths can be very hot, though, and the accounting would kill
         * performance, especially since it would be difficult to impossible
         * to handle the accounting in real mode only.
         *
         * To address that, rather than precisely accounting every page, we
-        * instead account for a worst case on locked memory when the iommu is
-        * enabled and disabled. The worst case upper bound on locked memory
+        * instead account for a worst case on pinned memory when the iommu is
+        * enabled and disabled. The worst case upper bound on pinned memory
         * is the size of the whole iommu window, which is usually relatively
         * small (compared to total memory sizes) on POWER hardware.
         *
@@ -317,7 +313,7 @@ static int tce_iommu_enable(struct tce_container *container)
         *
         * So we do not allow enabling a container without a group attached
         * as there is no way to know how much we should increment
-        * the locked_vm counter.
+        * the pinned_vm counter.
         */
        if (!tce_groups_attached(container))
                return -ENODEV;
@@ -335,12 +331,12 @@ static int tce_iommu_enable(struct tce_container *container)
        if (ret)
                return ret;
 
-       locked = table_group->tce32_size >> PAGE_SHIFT;
-       ret = try_increment_locked_vm(container->mm, locked);
+       pinned = table_group->tce32_size >> PAGE_SHIFT;
+       ret = try_increment_pinned_vm(container->mm, pinned);
        if (ret)
                return ret;
 
-       container->locked_pages = locked;
+       container->pinned_pages = pinned;
 
        container->enabled = true;
 
@@ -355,7 +351,7 @@ static void tce_iommu_disable(struct tce_container *container)
        container->enabled = false;
 
        BUG_ON(!container->mm);
-       decrement_locked_vm(container->mm, container->locked_pages);
+       decrement_pinned_vm(container->mm, container->pinned_pages);
 }
 
 static void *tce_iommu_open(unsigned long arg)
@@ -658,7 +654,7 @@ static long tce_iommu_create_table(struct tce_container *container,
        if (!table_size)
                return -EINVAL;
 
-       ret = try_increment_locked_vm(container->mm, table_size >> PAGE_SHIFT);
+       ret = try_increment_pinned_vm(container->mm, table_size >> PAGE_SHIFT);
        if (ret)
                return ret;
 
@@ -677,7 +673,7 @@ static void tce_iommu_free_table(struct tce_container *container,
        unsigned long pages = tbl->it_allocated_size >> PAGE_SHIFT;
 
        iommu_tce_table_put(tbl);
-       decrement_locked_vm(container->mm, pages);
+       decrement_pinned_vm(container->mm, pages);
 }
 
 static long tce_iommu_create_window(struct tce_container *container,
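decrement_pinned_vm() above shows the release side of the same pattern. As a hedged sketch (again a hypothetical helper, not code from the patch), the uncharge path clamps the request to whatever is currently accounted so a double release of accounting cannot drive the counter negative:

/* Illustrative counterpart to the charge helper sketched after patch 1. */
static void pinned_vm_uncharge(struct mm_struct *mm, long npages)
{
        s64 pinned_vm = atomic64_read(&mm->pinned_vm);

        /* Never release more than is currently accounted to this mm. */
        if (WARN_ON_ONCE(npages > pinned_vm))
                npages = pinned_vm;

        atomic64_sub(npages, &mm->pinned_vm);
}

The read/clamp/subtract sequence is not atomic as a whole; like the patched decrement_pinned_vm(), it accepts a narrow race in the warning path rather than reintroducing a lock.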
"EHLO userp2130.oracle.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727226AbfBKWqX (ORCPT ); Mon, 11 Feb 2019 17:46:23 -0500 Received: from pps.filterd (userp2130.oracle.com [127.0.0.1]) by userp2130.oracle.com (8.16.0.27/8.16.0.27) with SMTP id x1BMhX6S072792; Mon, 11 Feb 2019 22:44:57 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=corp-2018-07-02; bh=TgUKT8mxeCnG73KFILQS5yx+gXU/1gCSnq77DsUA3Jw=; b=0ANS8qxu8TUrSovIxqwwoe9wU2/vLFPDYfYCtTSQUkAD9qLOpgMzZevM/x0UcMlhK49Z 7aIjfLZEMOusmCpnPd7YQ31hCy9VcXSDJiZZskRnJmzG7fIiL7CrWmf0TPps/G8H56L6 pKN8SS+SgWvazK/god5fAnt2HFhSt6tp281BNStRPuBg8jPGY3z0TjRy+iinS9R5eVEA 5JtKnNRRE89P2LS14sT7f/TuTPjdcNU2+zW0EmPcFQRmAOptWifW9G9c17HgjQ1ebW5l sRJbXxCq2H0Ra5R5NNQ4XeDzlr3cdF3L+POvvYoQLjhrrMX9gPW7g3olX5OMeoWlyTis Vg== Received: from userv0022.oracle.com (userv0022.oracle.com [156.151.31.74]) by userp2130.oracle.com with ESMTP id 2qhrek8q85-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Mon, 11 Feb 2019 22:44:56 +0000 Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75]) by userv0022.oracle.com (8.14.4/8.14.4) with ESMTP id x1BMipfj030923 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Mon, 11 Feb 2019 22:44:51 GMT Received: from abhmp0022.oracle.com (abhmp0022.oracle.com [141.146.116.28]) by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id x1BMio0A006609; Mon, 11 Feb 2019 22:44:50 GMT Received: from localhost.localdomain (/73.60.114.248) by default (Oracle Beehive Gateway v4.0) with ESMTP ; Mon, 11 Feb 2019 14:44:50 -0800 From: Daniel Jordan To: jgg@ziepe.ca Cc: akpm@linux-foundation.org, dave@stgolabs.net, jack@suse.cz, cl@linux.com, linux-mm@kvack.org, kvm@vger.kernel.org, kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-fpga@vger.kernel.org, linux-kernel@vger.kernel.org, alex.williamson@redhat.com, paulus@ozlabs.org, benh@kernel.crashing.org, mpe@ellerman.id.au, hao.wu@intel.com, atull@kernel.org, mdf@kernel.org, aik@ozlabs.ru, daniel.m.jordan@oracle.com Subject: [PATCH 3/5] fpga/dlf/afu: use pinned_vm instead of locked_vm to account pinned pages Date: Mon, 11 Feb 2019 17:44:35 -0500 Message-Id: <20190211224437.25267-4-daniel.m.jordan@oracle.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190211224437.25267-1-daniel.m.jordan@oracle.com> References: <20190211224437.25267-1-daniel.m.jordan@oracle.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=nai engine=5900 definitions=9164 signatures=668683 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 priorityscore=1501 malwarescore=0 suspectscore=3 phishscore=0 bulkscore=0 spamscore=0 clxscore=1015 lowpriorityscore=0 mlxscore=0 impostorscore=0 mlxlogscore=999 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1810050000 definitions=main-1902110162 Sender: linux-fpga-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fpga@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Beginning with bc3e53f682d9 ("mm: distinguish between mlocked and pinned pages"), locked and pinned pages are accounted separately. The FPGA AFU driver accounts pinned pages to locked_vm; use pinned_vm instead. pinned_vm recently became atomic and so no longer relies on mmap_sem held as writer: delete. 
Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
---
 drivers/fpga/dfl-afu-dma-region.c | 50 ++++++++++++++----------------
 1 file changed, 23 insertions(+), 27 deletions(-)

diff --git a/drivers/fpga/dfl-afu-dma-region.c b/drivers/fpga/dfl-afu-dma-region.c
index e18a786fc943..a9a6b317fe2e 100644
--- a/drivers/fpga/dfl-afu-dma-region.c
+++ b/drivers/fpga/dfl-afu-dma-region.c
@@ -32,47 +32,43 @@ void afu_dma_region_init(struct dfl_feature_platform_data *pdata)
 }
 
 /**
- * afu_dma_adjust_locked_vm - adjust locked memory
+ * afu_dma_adjust_pinned_vm - adjust pinned memory
  * @dev: port device
  * @npages: number of pages
- * @incr: increase or decrease locked memory
  *
- * Increase or decrease the locked memory size with npages input.
+ * Increase or decrease the pinned memory size with npages input.
  *
  * Return 0 on success.
- * Return -ENOMEM if locked memory size is over the limit and no CAP_IPC_LOCK.
+ * Return -ENOMEM if pinned memory size is over the limit and no CAP_IPC_LOCK.
  */
-static int afu_dma_adjust_locked_vm(struct device *dev, long npages, bool incr)
+static int afu_dma_adjust_pinned_vm(struct device *dev, long pages)
 {
-        unsigned long locked, lock_limit;
+        unsigned long lock_limit;
+        s64 pinned_vm;
         int ret = 0;
 
         /* the task is exiting. */
-        if (!current->mm)
+        if (!current->mm || !pages)
                 return 0;
 
-        down_write(&current->mm->mmap_sem);
-
-        if (incr) {
-                locked = current->mm->locked_vm + npages;
+        if (pages > 0) {
                 lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
-
-                if (locked > lock_limit && !capable(CAP_IPC_LOCK))
+                pinned_vm = atomic64_add_return(pages, &current->mm->pinned_vm);
+                if (pinned_vm > lock_limit && !capable(CAP_IPC_LOCK)) {
                         ret = -ENOMEM;
-                else
-                        current->mm->locked_vm += npages;
+                        atomic64_sub(pages, &current->mm->pinned_vm);
+                }
         } else {
-                if (WARN_ON_ONCE(npages > current->mm->locked_vm))
-                        npages = current->mm->locked_vm;
-                current->mm->locked_vm -= npages;
+                pinned_vm = atomic64_read(&current->mm->pinned_vm);
+                if (WARN_ON_ONCE(pages > pinned_vm))
+                        pages = pinned_vm;
+                atomic64_sub(pages, &current->mm->pinned_vm);
         }
 
-        dev_dbg(dev, "[%d] RLIMIT_MEMLOCK %c%ld %ld/%ld%s\n", current->pid,
-                incr ? '+' : '-', npages << PAGE_SHIFT,
-                current->mm->locked_vm << PAGE_SHIFT, rlimit(RLIMIT_MEMLOCK),
-                ret ? "- exceeded" : "");
-
-        up_write(&current->mm->mmap_sem);
+        dev_dbg(dev, "[%d] RLIMIT_MEMLOCK %c%ld %lld/%lu%s\n", current->pid,
+                (pages > 0) ? '+' : '-', pages << PAGE_SHIFT,
+                (s64)atomic64_read(&current->mm->pinned_vm) << PAGE_SHIFT,
+                rlimit(RLIMIT_MEMLOCK), ret ? "- exceeded" : "");
"- exceeded" : ""); return ret; } @@ -92,7 +88,7 @@ static int afu_dma_pin_pages(struct dfl_feature_platform_data *pdata, struct device *dev = &pdata->dev->dev; int ret, pinned; - ret = afu_dma_adjust_locked_vm(dev, npages, true); + ret = afu_dma_adjust_pinned_vm(dev, npages); if (ret) return ret; @@ -121,7 +117,7 @@ static int afu_dma_pin_pages(struct dfl_feature_platform_data *pdata, free_pages: kfree(region->pages); unlock_vm: - afu_dma_adjust_locked_vm(dev, npages, false); + afu_dma_adjust_pinned_vm(dev, -npages); return ret; } @@ -141,7 +137,7 @@ static void afu_dma_unpin_pages(struct dfl_feature_platform_data *pdata, put_all_pages(region->pages, npages); kfree(region->pages); - afu_dma_adjust_locked_vm(dev, npages, false); + afu_dma_adjust_pinned_vm(dev, -npages); dev_dbg(dev, "%ld pages unpinned\n", npages); } From patchwork Mon Feb 11 22:44:36 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Daniel Jordan X-Patchwork-Id: 10806903 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6525113BF for ; Mon, 11 Feb 2019 22:46:42 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 572722B26B for ; Mon, 11 Feb 2019 22:46:42 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 4B14D2B39C; Mon, 11 Feb 2019 22:46:42 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI, UNPARSEABLE_RELAY autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E126C2B26B for ; Mon, 11 Feb 2019 22:46:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727713AbfBKWqX (ORCPT ); Mon, 11 Feb 2019 17:46:23 -0500 Received: from aserp2130.oracle.com ([141.146.126.79]:38646 "EHLO aserp2130.oracle.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727455AbfBKWqX (ORCPT ); Mon, 11 Feb 2019 17:46:23 -0500 Received: from pps.filterd (aserp2130.oracle.com [127.0.0.1]) by aserp2130.oracle.com (8.16.0.27/8.16.0.27) with SMTP id x1BMhngk080623; Mon, 11 Feb 2019 22:44:54 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=corp-2018-07-02; bh=69rc/HLjHXpWY00W09MehBQXPdb8h1Mg0HmD/buXmQE=; b=Gfr3nMzNWtSmRQKUAlaR6K8BOIJHCFY5nf7j4z47WYheSxOUPHcxULWcuty9OQxO0yp2 n8KtHxjsTHToGobqYK2XyioonKcPd+2gO4XOlGyT4wo94g19guNjk1xy0PQDMuAWpQz6 1mkYSXvlssEu7NmA2mavemTv8Jm8va+5wWBgcyLRLf8cRhLNsKhASO38yS1wutvmfHyo Q4YxEr2lSht1W9wx1znxe+Axbs9fslLbFTJpAYpNxLs4mzJt0+d6gGyMQmQfIX/YKOI1 0mT793Yti8IDnYpZsgFhj6zQoEJ4IUvUFOG7BzbxiPXaBRsCOPVWULWVhz8YoLY5CqyM sg== Received: from userv0022.oracle.com (userv0022.oracle.com [156.151.31.74]) by aserp2130.oracle.com with ESMTP id 2qhre58p9q-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Mon, 11 Feb 2019 22:44:54 +0000 Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72]) by userv0022.oracle.com (8.14.4/8.14.4) with ESMTP id x1BMirBK030983 (version=TLSv1/SSLv3 
From patchwork Mon Feb 11 22:44:36 2019
X-Patchwork-Submitter: Daniel Jordan
X-Patchwork-Id: 10806903
From: Daniel Jordan <daniel.m.jordan@oracle.com>
To: jgg@ziepe.ca
Cc: akpm@linux-foundation.org, dave@stgolabs.net, jack@suse.cz, cl@linux.com,
    linux-mm@kvack.org, kvm@vger.kernel.org, kvm-ppc@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-fpga@vger.kernel.org,
    linux-kernel@vger.kernel.org, alex.williamson@redhat.com, paulus@ozlabs.org,
    benh@kernel.crashing.org, mpe@ellerman.id.au, hao.wu@intel.com,
    atull@kernel.org, mdf@kernel.org, aik@ozlabs.ru, daniel.m.jordan@oracle.com
Subject: [PATCH 4/5] powerpc/mmu: use pinned_vm instead of locked_vm to account pinned pages
Date: Mon, 11 Feb 2019 17:44:36 -0500
Message-Id: <20190211224437.25267-5-daniel.m.jordan@oracle.com>
In-Reply-To: <20190211224437.25267-1-daniel.m.jordan@oracle.com>
References: <20190211224437.25267-1-daniel.m.jordan@oracle.com>
X-Mailer: git-send-email 2.20.1

Beginning with bc3e53f682d9 ("mm: distinguish between mlocked and pinned
pages"), locked and pinned pages are accounted separately.  The IOMMU
MMU helpers on powerpc account pinned pages to locked_vm; use pinned_vm
instead.

pinned_vm recently became an atomic64_t and so no longer relies on
mmap_sem being held for writing; delete that locking.
Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
---
 arch/powerpc/mm/mmu_context_iommu.c | 43 ++++++++++++++---------------
 1 file changed, 21 insertions(+), 22 deletions(-)

diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c
index a712a650a8b6..fdf670542847 100644
--- a/arch/powerpc/mm/mmu_context_iommu.c
+++ b/arch/powerpc/mm/mmu_context_iommu.c
@@ -40,36 +40,35 @@ struct mm_iommu_table_group_mem_t {
         u64 dev_hpa;            /* Device memory base address */
 };
 
-static long mm_iommu_adjust_locked_vm(struct mm_struct *mm,
+static long mm_iommu_adjust_pinned_vm(struct mm_struct *mm,
                 unsigned long npages, bool incr)
 {
-        long ret = 0, locked, lock_limit;
+        long ret = 0;
+        unsigned long lock_limit;
+        s64 pinned_vm;
 
         if (!npages)
                 return 0;
 
-        down_write(&mm->mmap_sem);
-
         if (incr) {
-                locked = mm->locked_vm + npages;
                 lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
-                if (locked > lock_limit && !capable(CAP_IPC_LOCK))
+                pinned_vm = atomic64_add_return(npages, &mm->pinned_vm);
+                if (pinned_vm > lock_limit && !capable(CAP_IPC_LOCK)) {
                         ret = -ENOMEM;
-                else
-                        mm->locked_vm += npages;
+                        atomic64_sub(npages, &mm->pinned_vm);
+                }
         } else {
-                if (WARN_ON_ONCE(npages > mm->locked_vm))
-                        npages = mm->locked_vm;
-                mm->locked_vm -= npages;
+                pinned_vm = atomic64_read(&mm->pinned_vm);
+                if (WARN_ON_ONCE(npages > pinned_vm))
+                        npages = pinned_vm;
+                atomic64_sub(npages, &mm->pinned_vm);
         }
-        pr_debug("[%d] RLIMIT_MEMLOCK HASH64 %c%ld %ld/%ld\n",
-                        current ? current->pid : 0,
-                        incr ? '+' : '-',
+        pr_debug("[%d] RLIMIT_MEMLOCK HASH64 %c%lu %ld/%lu\n",
+                        current ? current->pid : 0, incr ? '+' : '-',
                         npages << PAGE_SHIFT,
-                        mm->locked_vm << PAGE_SHIFT,
+                        atomic64_read(&mm->pinned_vm) << PAGE_SHIFT,
                         rlimit(RLIMIT_MEMLOCK));
-        up_write(&mm->mmap_sem);
 
         return ret;
 }
@@ -133,7 +132,7 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
                 struct mm_iommu_table_group_mem_t **pmem)
 {
         struct mm_iommu_table_group_mem_t *mem;
-        long i, j, ret = 0, locked_entries = 0;
+        long i, j, ret = 0, pinned_entries = 0;
         unsigned int pageshift;
         unsigned long flags;
         unsigned long cur_ua;
@@ -154,11 +153,11 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
         }
 
         if (dev_hpa == MM_IOMMU_TABLE_INVALID_HPA) {
-                ret = mm_iommu_adjust_locked_vm(mm, entries, true);
+                ret = mm_iommu_adjust_pinned_vm(mm, entries, true);
                 if (ret)
                         goto unlock_exit;
 
-                locked_entries = entries;
+                pinned_entries = entries;
         }
 
         mem = kzalloc(sizeof(*mem), GFP_KERNEL);
@@ -252,8 +251,8 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
         list_add_rcu(&mem->next, &mm->context.iommu_group_mem_list);
 
 unlock_exit:
-        if (locked_entries && ret)
-                mm_iommu_adjust_locked_vm(mm, locked_entries, false);
+        if (pinned_entries && ret)
+                mm_iommu_adjust_pinned_vm(mm, pinned_entries, false);
 
         mutex_unlock(&mem_list_mutex);
 
@@ -352,7 +351,7 @@ long mm_iommu_put(struct mm_struct *mm, struct mm_iommu_table_group_mem_t *mem)
         mm_iommu_release(mem);
 
         if (dev_hpa == MM_IOMMU_TABLE_INVALID_HPA)
-                mm_iommu_adjust_locked_vm(mm, entries, false);
+                mm_iommu_adjust_pinned_vm(mm, entries, false);
 
 unlock_exit:
         mutex_unlock(&mem_list_mutex);
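On powerpc the helper keeps its bool incr parameter, and the interesting part is how mm_iommu_do_alloc() uses it: the whole registration is charged up front and rolled back only if a later step fails. A condensed, hypothetical sketch of that shape (error handling collapsed; build_table() is an invented stand-in for the pinning and table setup the real function performs):

/* Invented stand-in for the pinning/table construction in mm_iommu_do_alloc(). */
static long build_table(struct mm_struct *mm, unsigned long entries);

static long example_register(struct mm_struct *mm, unsigned long entries)
{
        long ret;

        ret = mm_iommu_adjust_pinned_vm(mm, entries, true);     /* charge */
        if (ret)
                return ret;

        ret = build_table(mm, entries);
        if (ret)
                /* Mirror of the unlock_exit: path -- undo the charge. */
                mm_iommu_adjust_pinned_vm(mm, entries, false);

        return ret;
}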
From patchwork Mon Feb 11 22:44:37 2019
X-Patchwork-Submitter: Daniel Jordan
X-Patchwork-Id: 10806909
From: Daniel Jordan <daniel.m.jordan@oracle.com>
To: jgg@ziepe.ca
Cc: akpm@linux-foundation.org, dave@stgolabs.net, jack@suse.cz, cl@linux.com,
    linux-mm@kvack.org, kvm@vger.kernel.org, kvm-ppc@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-fpga@vger.kernel.org,
    linux-kernel@vger.kernel.org, alex.williamson@redhat.com, paulus@ozlabs.org,
    benh@kernel.crashing.org, mpe@ellerman.id.au, hao.wu@intel.com,
    atull@kernel.org, mdf@kernel.org, aik@ozlabs.ru, daniel.m.jordan@oracle.com
Subject: [PATCH 5/5] kvm/book3s: use pinned_vm instead of locked_vm to account pinned pages
Date: Mon, 11 Feb 2019 17:44:37 -0500
Message-Id: <20190211224437.25267-6-daniel.m.jordan@oracle.com>
In-Reply-To: <20190211224437.25267-1-daniel.m.jordan@oracle.com>
References: <20190211224437.25267-1-daniel.m.jordan@oracle.com>
X-Mailer: git-send-email 2.20.1
Memory used for TCE tables in kvm_vm_ioctl_create_spapr_tce is currently
accounted to locked_vm because it stays resident and its allocation is
directly triggered from userspace, as explained in f8626985c7c2 ("KVM:
PPC: Account TCE-containing pages in locked_vm").

However, since the memory comes straight from the page allocator (and to
a lesser extent unreclaimable slab) and is effectively pinned, it should
be accounted with pinned_vm (see bc3e53f682d9 ("mm: distinguish between
mlocked and pinned pages")).

pinned_vm recently became an atomic64_t and so no longer relies on
mmap_sem being held for writing; delete that locking.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
---
 arch/powerpc/kvm/book3s_64_vio.c | 35 ++++++++++++++------------------
 1 file changed, 15 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
index 532ab79734c7..2f8d7c051e4e 100644
--- a/arch/powerpc/kvm/book3s_64_vio.c
+++ b/arch/powerpc/kvm/book3s_64_vio.c
@@ -56,39 +56,34 @@ static unsigned long kvmppc_stt_pages(unsigned long tce_pages)
         return tce_pages + ALIGN(stt_bytes, PAGE_SIZE) / PAGE_SIZE;
 }
 
-static long kvmppc_account_memlimit(unsigned long stt_pages, bool inc)
+static long kvmppc_account_memlimit(unsigned long pages, bool inc)
 {
         long ret = 0;
+        s64 pinned_vm;
 
         if (!current || !current->mm)
                 return ret; /* process exited */
 
-        down_write(&current->mm->mmap_sem);
-
         if (inc) {
-                unsigned long locked, lock_limit;
+                unsigned long lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
 
-                locked = current->mm->locked_vm + stt_pages;
-                lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
-                if (locked > lock_limit && !capable(CAP_IPC_LOCK))
+                pinned_vm = atomic64_add_return(pages, &current->mm->pinned_vm);
+                if (pinned_vm > lock_limit && !capable(CAP_IPC_LOCK)) {
                         ret = -ENOMEM;
-                else
-                        current->mm->locked_vm += stt_pages;
+                        atomic64_sub(pages, &current->mm->pinned_vm);
+                }
         } else {
-                if (WARN_ON_ONCE(stt_pages > current->mm->locked_vm))
-                        stt_pages = current->mm->locked_vm;
+                pinned_vm = atomic64_read(&current->mm->pinned_vm);
+                if (WARN_ON_ONCE(pages > pinned_vm))
+                        pages = pinned_vm;
 
-                current->mm->locked_vm -= stt_pages;
+                atomic64_sub(pages, &current->mm->pinned_vm);
         }
 
-        pr_debug("[%d] RLIMIT_MEMLOCK KVM %c%ld %ld/%ld%s\n", current->pid,
-                        inc ? '+' : '-',
-                        stt_pages << PAGE_SHIFT,
-                        current->mm->locked_vm << PAGE_SHIFT,
-                        rlimit(RLIMIT_MEMLOCK),
-                        ret ? " - exceeded" : "");
-
-        up_write(&current->mm->mmap_sem);
+        pr_debug("[%d] RLIMIT_MEMLOCK KVM %c%lu %ld/%lu%s\n", current->pid,
+                        inc ? '+' : '-', pages << PAGE_SHIFT,
+                        atomic64_read(&current->mm->pinned_vm) << PAGE_SHIFT,
+                        rlimit(RLIMIT_MEMLOCK), ret ? " - exceeded" : "");
 
         return ret;
 }
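One way to see the user-visible effect of the whole series: memory accounted through pinned_vm is reported as VmPin in /proc/<pid>/status, while locked_vm is reported as VmLck, so with these patches applied a VFIO or KVM workload's pinned pages would move from the VmLck line to the VmPin line. A small userspace sketch (not part of the series) for watching both counters on a given pid:

/* Build: cc -O2 -o pinwatch pinwatch.c; run: ./pinwatch <pid> (defaults to self). */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
        char path[64], line[256];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/%s/status",
                 argc > 1 ? argv[1] : "self");
        f = fopen(path, "r");
        if (!f) {
                perror(path);
                return 1;
        }

        while (fgets(line, sizeof(line), f)) {
                /* VmLck tracks mm->locked_vm; VmPin tracks mm->pinned_vm. */
                if (!strncmp(line, "VmLck:", 6) || !strncmp(line, "VmPin:", 6))
                        fputs(line, stdout);
        }
        fclose(f);
        return 0;
}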