From patchwork Thu Sep 27 16:59:41 2018
X-Patchwork-Submitter: Yang Shi
X-Patchwork-Id: 10618345
From: Yang Shi <yang.shi@linux.alibaba.com>
To: mhocko@kernel.org, kirill@shutemov.name, willy@infradead.org,
    ldufour@linux.vnet.ibm.com, vbabka@suse.cz, akpm@linux-foundation.org
Cc: yang.shi@linux.alibaba.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [v3 PATCH 1/2 -mm] mm: mremap: downgrade mmap_sem to read when shrinking
Date: Fri, 28 Sep 2018 00:59:41 +0800
Message-Id: <1538067582-60038-1-git-send-email-yang.shi@linux.alibaba.com>

Other than munmap, mremap can also be used to shrink a memory mapping.
It may therefore hold the write mmap_sem for a long time when shrinking
a large mapping, as commit ("mm: mmap: zap pages with read mmap_sem in
munmap") describes. In the shrink-only case, mremap() no longer
manipulates vmas after the __do_munmap() call, so it is safe to
downgrade to the read mmap_sem there. The same optimization that
downgrades mmap_sem to read while zapping pages is therefore feasible
and reasonable for this case as well, and it significantly reduces the
period during which the exclusive mmap_sem is held while shrinking a
large mapping.
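To make the new calling convention concrete, here is a minimal sketch,
not part of the patch, of how a caller can drive __do_munmap() with the
downgrade enabled; the wrapper name example_shrink and its exact error
handling are illustrative assumptions (the real caller is mremap() in
mm/mremap.c, shown in the diff below):

static int example_shrink(struct mm_struct *mm, unsigned long addr,
			  size_t len, struct list_head *uf)
{
	int ret;

	if (down_write_killable(&mm->mmap_sem))
		return -EINTR;

	/*
	 * downgrade == true: __do_munmap() may convert the write lock
	 * into a read lock via downgrade_write() before zapping pages,
	 * and signals that it did so by returning 1.
	 */
	ret = __do_munmap(mm, addr, len, uf, true);
	if (ret == 1) {
		up_read(&mm->mmap_sem);	/* downgraded inside __do_munmap() */
		return 0;
	}
	up_write(&mm->mmap_sem);	/* not downgraded: 0 or -errno */
	return ret;
}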
MREMAP_FIXED and MREMAP_MAYMOVE are more complicated to adapt to this
optimization since they need to manipulate vmas after do_munmap();
downgrading mmap_sem there may create a race window. The simple shrink
case is the low-hanging fruit, and together with munmap it should cover
most unmap cases.

Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
v3: Addressed the comments from Vlastimil and Kirill, and added their
    Acked-by tags. Thanks.
v2: Rephrased the commit log per Michal.

 include/linux/mm.h |  2 ++
 mm/mmap.c          |  4 ++--
 mm/mremap.c        | 17 +++++++++++++----
 3 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a61ebe8..3028028 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2286,6 +2286,8 @@ extern unsigned long do_mmap(struct file *file, unsigned long addr,
 	unsigned long len, unsigned long prot, unsigned long flags,
 	vm_flags_t vm_flags, unsigned long pgoff, unsigned long *populate,
 	struct list_head *uf);
+extern int __do_munmap(struct mm_struct *, unsigned long, size_t,
+		       struct list_head *uf, bool downgrade);
 extern int do_munmap(struct mm_struct *, unsigned long, size_t,
 		     struct list_head *uf);

diff --git a/mm/mmap.c b/mm/mmap.c
index 847a17d..017bcfa 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2687,8 +2687,8 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
  * work. This now handles partial unmappings.
  * Jeremy Fitzhardinge
  */
-static int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
-		       struct list_head *uf, bool downgrade)
+int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+		struct list_head *uf, bool downgrade)
 {
 	unsigned long end;
 	struct vm_area_struct *vma, *prev, *last;

diff --git a/mm/mremap.c b/mm/mremap.c
index 5c2e185..3524d16 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -525,6 +525,7 @@ static int vma_expandable(struct vm_area_struct *vma, unsigned long delta)
 	unsigned long ret = -EINVAL;
 	unsigned long charged = 0;
 	bool locked = false;
+	bool downgraded = false;
 	struct vm_userfaultfd_ctx uf = NULL_VM_UFFD_CTX;
 	LIST_HEAD(uf_unmap_early);
 	LIST_HEAD(uf_unmap);
@@ -561,12 +562,17 @@ static int vma_expandable(struct vm_area_struct *vma, unsigned long delta)
 	/*
 	 * Always allow a shrinking remap: that just unmaps
 	 * the unnecessary pages..
-	 * do_munmap does all the needed commit accounting
+	 * __do_munmap does all the needed commit accounting, and
+	 * downgrades mmap_sem to read.
 	 */
 	if (old_len >= new_len) {
-		ret = do_munmap(mm, addr+new_len, old_len - new_len, &uf_unmap);
-		if (ret && old_len != new_len)
+		ret = __do_munmap(mm, addr+new_len, old_len - new_len,
+				  &uf_unmap, true);
+		if (ret < 0 && old_len != new_len)
 			goto out;
+		/* Returning 1 indicates mmap_sem is downgraded to read. */
+		else if (ret == 1)
+			downgraded = true;
 		ret = addr;
 		goto out;
 	}
@@ -631,7 +637,10 @@ static int vma_expandable(struct vm_area_struct *vma, unsigned long delta)
 		vm_unacct_memory(charged);
 		locked = 0;
 	}
-	up_write(&current->mm->mmap_sem);
+	if (downgraded)
+		up_read(&current->mm->mmap_sem);
+	else
+		up_write(&current->mm->mmap_sem);
 	if (locked && new_len > old_len)
 		mm_populate(new_addr + old_len, new_len - old_len);
 	userfaultfd_unmap_complete(mm, &uf_unmap_early);
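For reference, a hedged userspace illustration of the call pattern that
exercises this fast path: a plain shrinking mremap() with flags == 0
(no MREMAP_FIXED or MREMAP_MAYMOVE). The sizes are arbitrary and the
program is a demonstration only, not part of the patch:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t old_len = 256UL << 20;	/* 256 MiB anonymous mapping */
	size_t new_len = 1UL << 20;	/* shrink it to 1 MiB */
	void *p;

	p = mmap(NULL, old_len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(p, 0, old_len);	/* populate, so there are pages to zap */

	/*
	 * old_len >= new_len and flags == 0: with this patch the kernel
	 * zaps the discarded tail while holding only the read mmap_sem.
	 */
	if (mremap(p, old_len, new_len, 0) == MAP_FAILED) {
		perror("mremap");
		return 1;
	}
	munmap(p, new_len);
	return 0;
}

Since no vmas need to be moved here, zapping the discarded tail is the
only long-running work, which is exactly the part the patch moves under
the read lock.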