From patchwork Wed Jun 24 08:02:47 2020
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 11622567
From: Chris Wilson <chris@chris-wilson.co.uk>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
	Chris Wilson, Andrew Morton, Jason Gunthorpe
Subject: [PATCH 1/2] mm/mmu_notifier: Mark up direct reclaim paths with MAYFAIL
Date: Wed, 24 Jun 2020 09:02:47 +0100
Message-Id: <20200624080248.3701-1-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.20.1
When direct reclaim enters the shrinker and tries to reclaim pages, it has
to opportunistically unmap them [try_to_unmap_one]. For direct reclaim, the
calling context is unknown and may include attempts to unmap one page of a
dma object while attempting to allocate more pages for that object.

Pass the information along that we are inside an opportunistic unmap that
can allow that page to remain referenced and mapped, and let the callback
opt in to avoiding a recursive wait.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andrew Morton
Cc: Jason Gunthorpe
---
 include/linux/mmu_notifier.h | 15 ++++++++++++++-
 mm/mmu_notifier.c            |  3 +++
 mm/rmap.c                    |  5 +++--
 3 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index fc68f3570e19..ee1ad008951c 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -48,7 +48,8 @@ enum mmu_notifier_event {
 	MMU_NOTIFY_RELEASE,
 };
 
-#define MMU_NOTIFIER_RANGE_BLOCKABLE (1 << 0)
+#define MMU_NOTIFIER_RANGE_BLOCKABLE BIT(0)
+#define MMU_NOTIFIER_RANGE_MAYFAIL BIT(1)
 
 struct mmu_notifier_ops {
 	/*
@@ -169,6 +170,12 @@ struct mmu_notifier_ops {
 	 * a non-blocking behavior then the same applies to
 	 * invalidate_range_end.
 	 *
+	 * If mayfail is set then the callback may return -EAGAIN while still
+	 * holding its page references. This flag is set inside direct
+	 * reclaim paths that are opportunistically trying to unmap pages
+	 * from unknown contexts. The callback must be prepared to handle
+	 * the matching invalidate_range_end even after failing the
+	 * invalidate_range_start.
 	 */
 	int (*invalidate_range_start)(struct mmu_notifier *subscription,
 				      const struct mmu_notifier_range *range);
@@ -397,6 +404,12 @@ mmu_notifier_range_blockable(const struct mmu_notifier_range *range)
 	return (range->flags & MMU_NOTIFIER_RANGE_BLOCKABLE);
 }
 
+static inline bool
+mmu_notifier_range_mayfail(const struct mmu_notifier_range *range)
+{
+	return (range->flags & MMU_NOTIFIER_RANGE_MAYFAIL);
+}
+
 static inline void mmu_notifier_release(struct mm_struct *mm)
 {
 	if (mm_has_notifiers(mm))
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 352bb9f3ecc0..95b89cee7af4 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -493,6 +493,9 @@ static int mn_hlist_invalidate_range_start(
 			_ret = ops->invalidate_range_start(subscription, range);
 			if (!mmu_notifier_range_blockable(range))
 				non_block_end();
+			if (_ret == -EAGAIN &&
+			    mmu_notifier_range_mayfail(range))
+				_ret = 0;
 			if (_ret) {
 				pr_info("%pS callback failed with %d in %sblockable context.\n",
 					ops->invalidate_range_start, _ret,
diff --git a/mm/rmap.c b/mm/rmap.c
index 5fe2dedce1fc..912b737a3353 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1406,8 +1406,9 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	 * Note that the page can not be free in this function as call of
 	 * try_to_unmap() must hold a reference on the page.
 	 */
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
-				address,
+	mmu_notifier_range_init(&range,
+				MMU_NOTIFY_CLEAR, MMU_NOTIFIER_RANGE_MAYFAIL,
+				vma, vma->vm_mm, address,
 				min(vma->vm_end, address + page_size(page)));
 	if (PageHuge(page)) {
 		/*