From patchwork Sun Nov 3 11:21:13 2019
X-Patchwork-Submitter: Hillf Danton
X-Patchwork-Id: 11224319
From: Hillf Danton <hdanton@sina.com>
To: linux-mm <linux-mm@kvack.org>
Cc: Andrew Morton <akpm@linux-foundation.org>, linux-kernel <linux-kernel@vger.kernel.org>, Vlastimil Babka <vbabka@suse.cz>, Jan Kara <jack@suse.cz>, Mel Gorman <mgorman@suse.de>, Jerome Glisse <jglisse@redhat.com>, Dan Williams <dan.j.williams@intel.com>, Ira Weiny <ira.weiny@intel.com>, John Hubbard <jhubbard@nvidia.com>, Christoph Hellwig <hch@lst.de>, Jonathan Corbet <corbet@lwn.net>, Hillf Danton <hdanton@sina.com>
Subject: [RFC] mm: gup: add helper page_try_gup_pin(page)
Date: Sun, 3 Nov 2019 19:21:13 +0800
Message-Id: <20191103112113.8256-1-hdanton@sina.com>
A helper is added to mitigate the gup issue described at
https://lwn.net/Articles/784574/: it is unsafe to write out a dirty page
while it is gup-pinned for DMA.

In the current writeback context, dirty pages are written out without
checking whether they have been gup pinned, and without marking them to
keep gupers off. In the gup context, file pages can be pinned while
other gupers hold them and while writeback is in flight, with neither
taken into account.

What makes the issue harder to tackle is that no room, supposedly not
even one bit, is left in the current page struct for tracking gupers.

The approach here is to make gupers mutually exclusive, because it makes
no sense to allow a file page to have multiple gupers at the same time;
a guper's singularity then makes it possible to tell whether a guper
exists by looking at the change in page count. The result of that
singularity is not yet 100% correct but a "best effort", as the effect
of a random get_page() is perhaps also folded into the count. It is
assumed that this best effort is feasible and acceptable in practice,
without the cost of growing the page struct by one bit, given that
something similar has been applied in the page migration and reclaim
contexts for a while.

With the helper in place, we skip writing out a dirty page if a guper is
detected; on gupping, we give up pinning a file page that is under
writeback, or on losing the race to become the guper. The end result:
no gup-pinned page will be put under writeback.

It is based on next-20191031.
Signed-off-by: Hillf Danton <hdanton@sina.com>
---

--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1055,6 +1055,29 @@ static inline void put_page(struct page *page)
 		__put_page(page);
 }
 
+/*
+ * @page must be pagecache page
+ */
+static inline bool page_try_gup_pin(struct page *page)
+{
+	int count;
+
+	page = compound_head(page);
+	count = page_ref_count(page);
+	smp_mb__after_atomic();
+
+	if (!count || count > page_mapcount(page) + 1 +
+			page_has_private(page))
+		return false;
+
+	if (page_ref_inc_return(page) == count + 1) {
+		smp_mb__after_atomic();
+		return true;
+	}
+	put_page(page);
+	return false;
+}
+
 /**
  * put_user_page() - release a gup-pinned page
  * @page:  pointer to page to be released
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -253,7 +253,11 @@ retry:
 	}
 
 	if (flags & FOLL_GET) {
-		if (unlikely(!try_get_page(page))) {
+		if (page_is_file_cache(page)) {
+			if (PageWriteback(page) || !page_try_gup_pin(page))
+				goto pin_fail;
+		} else if (unlikely(!try_get_page(page))) {
+pin_fail:
 			page = ERR_PTR(-ENOMEM);
 			goto out;
 		}
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2202,6 +2202,9 @@ int write_cache_pages(struct address_space *mapping,
 
 			done_index = page->index;
 
+			if (!page_try_gup_pin(page))
+				continue;
+
 			lock_page(page);
 
 			/*
@@ -2215,6 +2218,7 @@ int write_cache_pages(struct address_space *mapping,
 			if (unlikely(page->mapping != mapping)) {
 continue_unlock:
 				unlock_page(page);
+				put_page(page);
 				continue;
 			}
 
@@ -2236,6 +2240,11 @@ continue_unlock:
 				trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
 				error = (*writepage)(page, wbc, data);
+				/*
+				 * protection of gup pin is no longer needed after
+				 * putting page under writeback
+				 */
+				put_page(page);
 				if (unlikely(error)) {
 					/*
 					 * Handle errors according to the type of