From patchwork Wed Nov 21 06:21:00 2018
X-Patchwork-Submitter: Peter Xu <peterx@redhat.com>
X-Patchwork-Id: 10691993
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: Keith Busch, Jerome Glisse, peterx@redhat.com, Dan Williams,
    linux-mm@kvack.org, Matthew Wilcox, Al Viro, Andrea Arcangeli,
    Huang Ying, Mike Kravetz, Mike Rapoport, Linus Torvalds,
    "Michael S. Tsirkin", "Kirill A. Shutemov", Michal Hocko,
    Vlastimil Babka, Pavel Tatashin, Andrew Morton
Subject: [PATCH RFC v3 1/4] mm: gup: rename "nonblocking" to "locked" where proper
Date: Wed, 21 Nov 2018 14:21:00 +0800
Message-Id: <20181121062103.18835-2-peterx@redhat.com>
In-Reply-To: <20181121062103.18835-1-peterx@redhat.com>
References: <20181121062103.18835-1-peterx@redhat.com>

There are plenty of places around __get_user_pages() that take a
parameter "nonblocking" which does not really mean "it won't block"
(it can block), but instead indicates whether the mmap_sem is
released by up_read() during the page fault handling, mostly when
VM_FAULT_RETRY is returned.

We have the correct naming in e.g. get_user_pages_locked() or
get_user_pages_remote() as "locked", but there are still many places
that use "nonblocking" as the name.  Rename those to "locked" where
appropriate, to better suit the functionality of the variable.  While
at it, fix up some of the comments accordingly.
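As an illustration of the convention (a hypothetical sketch, not part
of the patch: pin_user_buffer() is an invented name, while
get_user_pages_locked(), down_read() and up_read() are the existing
APIs the convention comes from):

    #include <linux/mm.h>
    #include <linux/sched.h>

    /*
     * Hypothetical caller of get_user_pages_locked().  The "locked"
     * out-parameter reports whether mmap_sem is still held on return:
     * if the fault path hit VM_FAULT_RETRY and dropped the lock,
     * "locked" is cleared and the final up_read() must be skipped.
     */
    static long pin_user_buffer(unsigned long start, unsigned long nr_pages,
                                unsigned int gup_flags, struct page **pages)
    {
            int locked = 1;
            long ret;

            down_read(&current->mm->mmap_sem);
            ret = get_user_pages_locked(start, nr_pages, gup_flags,
                                        pages, &locked);
            if (locked)
                    up_read(&current->mm->mmap_sem);
            return ret;
    }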
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/gup.c     | 44 +++++++++++++++++++++-----------------------
 mm/hugetlb.c |  8 ++++----
 2 files changed, 25 insertions(+), 27 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index aa43620a3270..a40111c742ba 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -506,12 +506,12 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
 }
 
 /*
- * mmap_sem must be held on entry. If @nonblocking != NULL and
- * *@flags does not include FOLL_NOWAIT, the mmap_sem may be released.
- * If it is, *@nonblocking will be set to 0 and -EBUSY returned.
+ * mmap_sem must be held on entry. If @locked != NULL and *@flags
+ * does not include FOLL_NOWAIT, the mmap_sem may be released. If it
+ * is, *@locked will be set to 0 and -EBUSY returned.
 */
 static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
-		unsigned long address, unsigned int *flags, int *nonblocking)
+		unsigned long address, unsigned int *flags, int *locked)
 {
 	unsigned int fault_flags = 0;
 	vm_fault_t ret;
@@ -523,7 +523,7 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
 		fault_flags |= FAULT_FLAG_WRITE;
 	if (*flags & FOLL_REMOTE)
 		fault_flags |= FAULT_FLAG_REMOTE;
-	if (nonblocking)
+	if (locked)
 		fault_flags |= FAULT_FLAG_ALLOW_RETRY;
 	if (*flags & FOLL_NOWAIT)
 		fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT;
@@ -549,8 +549,8 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
 	}
 
 	if (ret & VM_FAULT_RETRY) {
-		if (nonblocking && !(fault_flags & FAULT_FLAG_RETRY_NOWAIT))
-			*nonblocking = 0;
+		if (locked && !(fault_flags & FAULT_FLAG_RETRY_NOWAIT))
+			*locked = 0;
 		return -EBUSY;
 	}
 
@@ -627,7 +627,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
  *		only intends to ensure the pages are faulted in.
  * @vmas:	array of pointers to vmas corresponding to each page.
  *		Or NULL if the caller does not require them.
- * @nonblocking: whether waiting for disk IO or mmap_sem contention
+ * @locked:     whether we're still with the mmap_sem held
  *
  * Returns number of pages pinned. This may be fewer than the number
  * requested. If nr_pages is 0 or negative, returns 0. If no pages
@@ -656,13 +656,11 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
  * appropriate) must be called after the page is finished with, and
  * before put_page is called.
  *
- * If @nonblocking != NULL, __get_user_pages will not wait for disk IO
- * or mmap_sem contention, and if waiting is needed to pin all pages,
- * *@nonblocking will be set to 0.  Further, if @gup_flags does not
- * include FOLL_NOWAIT, the mmap_sem will be released via up_read() in
- * this case.
+ * If @locked != NULL, *@locked will be set to 0 when mmap_sem is
+ * released by an up_read().  That can happen if @gup_flags does not
+ * have FOLL_NOWAIT.
  *
- * A caller using such a combination of @nonblocking and @gup_flags
+ * A caller using such a combination of @locked and @gup_flags
  * must therefore hold the mmap_sem for reading only, and recognize
  * when it's been released.  Otherwise, it must be held for either
  * reading or writing and will not be released.
@@ -674,7 +672,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
-		struct vm_area_struct **vmas, int *nonblocking)
+		struct vm_area_struct **vmas, int *locked)
 {
 	long ret = 0, i = 0;
 	struct vm_area_struct *vma = NULL;
@@ -719,7 +717,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		if (is_vm_hugetlb_page(vma)) {
 			i = follow_hugetlb_page(mm, vma, pages, vmas,
 					&start, &nr_pages, i,
-					gup_flags, nonblocking);
+					gup_flags, locked);
 			continue;
 		}
 	}
@@ -737,7 +735,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		page = follow_page_mask(vma, start, foll_flags, &ctx);
 		if (!page) {
 			ret = faultin_page(tsk, vma, start, &foll_flags,
-					nonblocking);
+					locked);
 			switch (ret) {
 			case 0:
 				goto retry;
@@ -1196,7 +1194,7 @@ EXPORT_SYMBOL(get_user_pages_longterm);
 * @vma:   target vma
 * @start: start address
 * @end:   end address
- * @nonblocking:
+ * @locked: whether the mmap_sem is still held
 *
 * This takes care of mlocking the pages too if VM_LOCKED is set.
 *
@@ -1204,14 +1202,14 @@ EXPORT_SYMBOL(get_user_pages_longterm);
 *
 * vma->vm_mm->mmap_sem must be held.
 *
- * If @nonblocking is NULL, it may be held for read or write and will
+ * If @locked is NULL, it may be held for read or write and will
 * be unperturbed.
 *
- * If @nonblocking is non-NULL, it must held for read only and may be
- * released.  If it's released, *@nonblocking will be set to 0.
+ * If @locked is non-NULL, it must be held for read only and may be
+ * released.  If it's released, *@locked will be set to 0.
 */
 long populate_vma_page_range(struct vm_area_struct *vma,
-		unsigned long start, unsigned long end, int *nonblocking)
+		unsigned long start, unsigned long end, int *locked)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long nr_pages = (end - start) / PAGE_SIZE;
@@ -1246,7 +1244,7 @@ long populate_vma_page_range(struct vm_area_struct *vma,
 	 * not result in a stack expansion that recurses back here.
 	 */
 	return __get_user_pages(current, mm, start, nr_pages, gup_flags,
-				NULL, NULL, nonblocking);
+				NULL, NULL, locked);
 }
 
 /*
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7f2a28ab46d5..a131b9e750d3 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4181,7 +4181,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			 struct page **pages, struct vm_area_struct **vmas,
 			 unsigned long *position, unsigned long *nr_pages,
-			 long i, unsigned int flags, int *nonblocking)
+			 long i, unsigned int flags, int *locked)
 {
 	unsigned long pfn_offset;
 	unsigned long vaddr = *position;
@@ -4252,7 +4252,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			spin_unlock(ptl);
 			if (flags & FOLL_WRITE)
 				fault_flags |= FAULT_FLAG_WRITE;
-			if (nonblocking)
+			if (locked)
 				fault_flags |= FAULT_FLAG_ALLOW_RETRY;
 			if (flags & FOLL_NOWAIT)
 				fault_flags |= FAULT_FLAG_ALLOW_RETRY |
@@ -4269,8 +4269,8 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 				break;
 			}
 			if (ret & VM_FAULT_RETRY) {
-				if (nonblocking)
-					*nonblocking = 0;
+				if (locked)
+					*locked = 0;
 				*nr_pages = 0;
 				/*
 				 * VM_FAULT_RETRY must not return an