From patchwork Fri Jun 29 22:39:43 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yang Shi <yang.shi@linux.alibaba.com>
X-Patchwork-Id: 10497711
From: Yang Shi <yang.shi@linux.alibaba.com>
To: mhocko@kernel.org, willy@infradead.org, ldufour@linux.vnet.ibm.com,
    akpm@linux-foundation.org, peterz@infradead.org, mingo@redhat.com,
    acme@kernel.org, alexander.shishkin@linux.intel.com, jolsa@redhat.com,
    namhyung@kernel.org, tglx@linutronix.de, hpa@zytor.com
Cc: yang.shi@linux.alibaba.com,
    linux-mm@kvack.org, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [RFC v3 PATCH 3/5] mm: refactor do_munmap() to extract the common part
Date: Sat, 30 Jun 2018 06:39:43 +0800
Message-Id: <1530311985-31251-4-git-send-email-yang.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1530311985-31251-1-git-send-email-yang.shi@linux.alibaba.com>
References: <1530311985-31251-1-git-send-email-yang.shi@linux.alibaba.com>
Sender: owner-linux-mm@kvack.org

Introduce two new helper functions:
  * munmap_addr_sanity()
  * munmap_lookup_vma()

They will be used by do_munmap() and, in a later patch in this series, by
the new variant of do_munmap() that zaps large mappings early.  There is
no functional change, just code refactoring.

Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 mm/mmap.c | 107 ++++++++++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 72 insertions(+), 35 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index d1eb87e..87dcf83 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2686,34 +2686,45 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
 	return __split_vma(mm, vma, addr, new_below);
 }
 
-/* Munmap is split into 2 main parts -- this part which finds
- * what needs doing, and the areas themselves, which do the
- * work. This now handles partial unmappings.
- * Jeremy Fitzhardinge
- */
-int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
-	      struct list_head *uf)
+static inline bool munmap_addr_sanity(unsigned long start, size_t len)
 {
-	unsigned long end;
-	struct vm_area_struct *vma, *prev, *last;
+	if ((offset_in_page(start)) || start > TASK_SIZE || len > TASK_SIZE - start)
+		return false;
 
-	if ((offset_in_page(start)) || start > TASK_SIZE || len > TASK_SIZE-start)
-		return -EINVAL;
+	if (PAGE_ALIGN(len) == 0)
+		return false;
 
-	len = PAGE_ALIGN(len);
-	if (len == 0)
-		return -EINVAL;
+	return true;
+}
+
+/*
+ * munmap_lookup_vma: find the first overlap vma and split overlap vmas.
+ * @mm: mm_struct
+ * @vma: the first overlapping vma
+ * @prev: vma's prev
+ * @start: start address
+ * @end: end address
+ *
+ * returns 1 if successful, 0 or errno otherwise
+ */
+static int munmap_lookup_vma(struct mm_struct *mm, struct vm_area_struct **vma,
+			     struct vm_area_struct **prev, unsigned long start,
+			     unsigned long end)
+{
+	struct vm_area_struct *tmp, *last;
+	int ret;
 
 	/* Find the first overlapping VMA */
-	vma = find_vma(mm, start);
-	if (!vma)
+	tmp = find_vma(mm, start);
+	if (!tmp)
 		return 0;
-	prev = vma->vm_prev;
-	/* we have start < vma->vm_end */
+
+	*prev = tmp->vm_prev;
+
+	/* we have start < vma->vm_end */
 
 	/* if it doesn't overlap, we have nothing.. */
-	end = start + len;
-	if (vma->vm_start >= end)
+	if (tmp->vm_start >= end)
 		return 0;
 
 	/*
@@ -2723,31 +2734,57 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	 * unmapped vm_area_struct will remain in use: so lower split_vma
 	 * places tmp vma above, and higher split_vma places tmp vma below.
 	 */
-	if (start > vma->vm_start) {
-		int error;
-
+	if (start > tmp->vm_start) {
 		/*
 		 * Make sure that map_count on return from munmap() will
 		 * not exceed its limit; but let map_count go just above
 		 * its limit temporarily, to help free resources as expected.
 		 */
-		if (end < vma->vm_end && mm->map_count >= sysctl_max_map_count)
+		if (end < tmp->vm_end &&
+		    mm->map_count > sysctl_max_map_count)
 			return -ENOMEM;
 
-		error = __split_vma(mm, vma, start, 0);
-		if (error)
-			return error;
-		prev = vma;
+		ret = __split_vma(mm, tmp, start, 0);
+		if (ret)
+			return ret;
+		*prev = tmp;
 	}
 
 	/* Does it split the last one? */
 	last = find_vma(mm, end);
 	if (last && end > last->vm_start) {
-		int error = __split_vma(mm, last, end, 1);
-		if (error)
-			return error;
+		ret = __split_vma(mm, last, end, 1);
+		if (ret)
+			return ret;
 	}
-	vma = prev ? prev->vm_next : mm->mmap;
+
+	*vma = *prev ? (*prev)->vm_next : mm->mmap;
+
+	return 1;
+}
+
+/* Munmap is split into 2 main parts -- this part which finds
+ * what needs doing, and the areas themselves, which do the
+ * work. This now handles partial unmappings.
+ * Jeremy Fitzhardinge
+ */
+int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+	      struct list_head *uf)
+{
+	unsigned long end;
+	struct vm_area_struct *vma = NULL, *prev;
+	int ret = 0;
+
+	if (!munmap_addr_sanity(start, len))
+		return -EINVAL;
+
+	len = PAGE_ALIGN(len);
+
+	end = start + len;
+
+	ret = munmap_lookup_vma(mm, &vma, &prev, start, end);
+	if (ret != 1)
+		return ret;
 
 	if (unlikely(uf)) {
 		/*
@@ -2759,9 +2796,9 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 		 * split, despite we could. This is unlikely enough
 		 * failure that it's not worth optimizing it for.
 		 */
-		int error = userfaultfd_unmap_prep(vma, start, end, uf);
-		if (error)
-			return error;
+		ret = userfaultfd_unmap_prep(vma, start, end, uf);
+		if (ret)
+			return ret;
 	}
 
 	/*