From patchwork Mon Feb 24 20:30:40 2020
X-Patchwork-Submitter: Michel Lespinasse
X-Patchwork-Id: 11401497
Date: Mon, 24 Feb 2020 12:30:40 -0800
In-Reply-To: <20200224203057.162467-1-walken@google.com>
Message-Id: <20200224203057.162467-8-walken@google.com>
References: <20200224203057.162467-1-walken@google.com>
X-Mailer: git-send-email 2.25.0.265.gbab2e86ba0-goog
Subject: [RFC PATCH 07/24] mm/memory: add range field to struct vm_fault
From: Michel Lespinasse
To: Peter Zijlstra, Andrew Morton, Laurent Dufour, Vlastimil Babka,
    Matthew Wilcox, "Liam R . Howlett", Jerome Glisse, Davidlohr Bueso,
    David Rientjes
Cc: linux-mm, Michel Lespinasse

Add a range field to struct vm_fault. This carries the range that was
locked for the given fault. Faults that release the mmap_sem should
pass the specified range.

Signed-off-by: Michel Lespinasse
---
 include/linux/mm.h | 1 +
 mm/hugetlb.c       | 1 +
 mm/khugepaged.c    | 1 +
 mm/memory.c        | 1 +
 4 files changed, 4 insertions(+)

diff --git include/linux/mm.h include/linux/mm.h
index 052f423d7f67..a1c9a0aa898b 100644
--- include/linux/mm.h
+++ include/linux/mm.h
@@ -451,6 +451,7 @@ struct vm_fault {
					 * page table to avoid allocation from
					 * atomic context.
					 */
+	struct mm_lock_range *range;	/* MM read lock range. */
 };
 
 /* page entry size for vm->huge_fault() */
diff --git mm/hugetlb.c mm/hugetlb.c
index dd8737a94bec..662f34b6c869 100644
--- mm/hugetlb.c
+++ mm/hugetlb.c
@@ -3831,6 +3831,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 		.vma = vma,
 		.address = haddr,
 		.flags = flags,
+		.range = mm_coarse_lock_range(),
 		/*
 		 * Hard to debug if it ends up being
 		 * used by a callee that assumes
diff --git mm/khugepaged.c mm/khugepaged.c
index 7ee8ae64824b..a7807bb0d631 100644
--- mm/khugepaged.c
+++ mm/khugepaged.c
@@ -900,6 +900,7 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 		.flags = FAULT_FLAG_ALLOW_RETRY,
 		.pmd = pmd,
 		.pgoff = linear_page_index(vma, address),
+		.range = mm_coarse_lock_range(),
 	};
 
 	/* we only decide to swapin, if there is enough young ptes */
diff --git mm/memory.c mm/memory.c
index 45b42fa02a2e..6cb3359f0857 100644
--- mm/memory.c
+++ mm/memory.c
@@ -4047,6 +4047,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 		.flags = flags,
 		.pgoff = linear_page_index(vma, address),
 		.gfp_mask = __get_fault_gfp_mask(vma),
+		.range = mm_coarse_lock_range(),
 	};
 	unsigned int dirty = flags & FAULT_FLAG_WRITE;
 	struct mm_struct *mm = vma->vm_mm;
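For illustration, the pattern the patch introduces — recording the locked
range in the fault structure so later code can refer to exactly what the
caller locked — can be sketched in userspace C. The struct layouts and the
`mm_coarse_lock_range()` / `fault_covered()` helpers below are simplified
stand-ins, not the kernel's real definitions:

```c
/*
 * Illustrative sketch only -- NOT the kernel's definitions. These are
 * simplified stand-ins mirroring the pattern the patch introduces.
 */

/* The span of the address space a reader has locked. */
struct mm_lock_range {
	unsigned long start;
	unsigned long end;
};

/*
 * The coarse range stands for "the whole address space", i.e. the
 * semantics of a plain mmap_sem read lock.
 */
static struct mm_lock_range coarse_range = { 0UL, ~0UL };

static struct mm_lock_range *mm_coarse_lock_range(void)
{
	return &coarse_range;
}

/* Minimal stand-in for struct vm_fault carrying the locked range. */
struct vm_fault {
	unsigned long address;
	struct mm_lock_range *range;	/* MM read lock range. */
};

/*
 * With the range recorded in the fault, a handler that drops and
 * retakes the lock knows exactly which range it held, and can
 * sanity-check that the faulting address is covered by it.
 */
static int fault_covered(const struct vm_fault *vmf)
{
	return vmf->address >= vmf->range->start &&
	       vmf->address < vmf->range->end;
}
```

In this sketch every fault site initializes `.range = mm_coarse_lock_range()`,
matching today's whole-address-space mmap_sem semantics; later patches in a
series like this one could then narrow the range without touching the fault
handlers again.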