From patchwork Mon Feb 24 20:30:46 2020
X-Patchwork-Submitter: Michel Lespinasse
X-Patchwork-Id: 11401509
Date: Mon, 24 Feb 2020 12:30:46 -0800
In-Reply-To: <20200224203057.162467-1-walken@google.com>
Message-Id: <20200224203057.162467-14-walken@google.com>
References: <20200224203057.162467-1-walken@google.com>
Subject: [RFC PATCH 13/24] mm/memory: add prepare_mm_fault() function
From: Michel Lespinasse <walken@google.com>
To: Peter Zijlstra, Andrew Morton, Laurent Dufour, Vlastimil Babka,
    Matthew Wilcox, Liam R. Howlett, Jerome Glisse, Davidlohr Bueso,
    David Rientjes
Cc: linux-mm, Michel Lespinasse

Add a prepare_mm_fault() function, which may allocate an anon_vma if
required for the incoming fault. This is necessary because the anon_vma
must be allocated in the vma of record, while in the range locked case
the fault will operate on a pseudo-vma.
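
As an illustration of the intended call sequence (this sketch is not
part of the patch; the helper name, the on-stack pseudo-vma copy and
the way the range is passed in are assumptions made for illustration):

/*
 * Hypothetical caller: prepare_mm_fault() runs against the vma of
 * record, so any anon_vma allocation lands there, before the fault
 * proper operates on a pseudo-vma copy under the range lock.
 */
static vm_fault_t do_range_fault(struct vm_area_struct *vma,
				 unsigned long address, unsigned int flags,
				 struct mm_lock_range *range)
{
	struct vm_area_struct pvma;
	vm_fault_t ret;

	/* Allocate vma->anon_vma up front if this fault may need it. */
	ret = prepare_mm_fault(vma, flags);
	if (ret)
		return ret;

	pvma = *vma;	/* the fault code only sees the pseudo-vma copy */
	return handle_mm_fault_range(&pvma, address, flags, range);
}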
Signed-off-by: Michel Lespinasse <walken@google.com>
---
 include/linux/mm.h | 14 ++++++++++++++
 mm/memory.c        | 26 ++++++++++++++++++++++++++
 2 files changed, 40 insertions(+)

diff --git include/linux/mm.h include/linux/mm.h
index 1b6b022064b4..43b7121ae005 100644
--- include/linux/mm.h
+++ include/linux/mm.h
@@ -1460,6 +1460,15 @@ int generic_error_remove_page(struct address_space *mapping,
 					struct page *page);
 int invalidate_inode_page(struct page *page);
 #ifdef CONFIG_MMU
+extern vm_fault_t __prepare_mm_fault(struct vm_area_struct *vma,
+		unsigned int flags);
+static inline vm_fault_t prepare_mm_fault(struct vm_area_struct *vma,
+		unsigned int flags)
+{
+	if (likely(vma->anon_vma))
+		return 0;
+	return __prepare_mm_fault(vma, flags);
+}
 extern vm_fault_t handle_mm_fault_range(struct vm_area_struct *vma,
 		unsigned long address, unsigned int flags,
 		struct mm_lock_range *range);
@@ -1477,6 +1486,11 @@ void unmap_mapping_pages(struct address_space *mapping,
 void unmap_mapping_range(struct address_space *mapping,
 		loff_t const holebegin, loff_t const holelen, int even_cows);
 #else
+static inline vm_fault_t prepare_mm_fault(struct vm_area_struct *vma,
+		unsigned int flags)
+{
+	return 0;
+}
 static inline vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
 		unsigned long address, unsigned int flags)
 {
diff --git mm/memory.c mm/memory.c
index 3da4ae504957..9d0b761833fe 100644
--- mm/memory.c
+++ mm/memory.c
@@ -4129,6 +4129,32 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 	return handle_pte_fault(&vmf);
 }
 
+vm_fault_t __prepare_mm_fault(struct vm_area_struct *vma, unsigned int flags)
+{
+	vm_fault_t ret = 0;
+
+	if (vma_is_anonymous(vma) ||
+	    ((flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) ||
+	    (is_vm_hugetlb_page(vma) && !(vma->vm_flags & VM_MAYSHARE))) {
+		if (flags & FAULT_FLAG_USER)
+			mem_cgroup_enter_user_fault();
+		if (unlikely(__anon_vma_prepare(vma)))
+			ret = VM_FAULT_OOM;
+		if (flags & FAULT_FLAG_USER) {
+			mem_cgroup_exit_user_fault();
+			/*
+			 * The task may have entered a memcg OOM situation but
+			 * if the allocation error was handled gracefully (no
+			 * VM_FAULT_OOM), there is no need to kill anything.
+			 * Just clean up the OOM state peacefully.
+			 */
+			if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
+				mem_cgroup_oom_synchronize(false);
+		}
+	}
+	return ret;
+}
+
 /*
  * By the time we get here, we already hold the mm semaphore
 *
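
A note on the fast path of the new helper: prepare_mm_fault() is a
static inline whose common case is a single vma->anon_vma test
returning 0, so only the first anonymous fault in a given vma takes
the out-of-line __prepare_mm_fault() call. The FAULT_FLAG_USER
bracketing with mem_cgroup_enter_user_fault() /
mem_cgroup_exit_user_fault() mirrors the memcg OOM handling in
handle_mm_fault(), so an OOM state entered while allocating the
anon_vma is either reported via VM_FAULT_OOM or synchronized away.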