From patchwork Tue May 26 19:51:15 2020
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 11571149
From: Jens Axboe
To: io-uring@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, akpm@linux-foundation.org, Jens Axboe
Subject: [PATCH 04/12] mm: add support for async page locking
Date: Tue, 26 May 2020 13:51:15 -0600
Message-Id: <20200526195123.29053-5-axboe@kernel.dk>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200526195123.29053-1-axboe@kernel.dk>
References: <20200526195123.29053-1-axboe@kernel.dk>

Normally waiting for a page to become unlocked, or locking the page,
requires waiting for IO to complete. Add support for lock_page_async()
and wait_on_page_locked_async(), which are callback based instead. This
allows a caller to get notified when a page becomes unlocked, rather
than wait for it. We add a new iocb field, ki_waitq, to pass in the
necessary data for this to happen. We can unionize this with ki_cookie,
since that is only used for polled IO. Polled IO can never co-exist with
async callbacks, as it is (by definition) polled completions. struct
wait_page_key is made public, and struct wait_page_queue is used as the
interface between the caller and the core.
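
To make the intended calling convention concrete, here is a minimal
caller-side sketch (illustrative only, not part of this patch; the
helper start_async_read() and the wake callback async_read_waiter() are
hypothetical names). The caller marks its kiocb with IOCB_WAITQ, points
ki_waitq at a wait_page_queue whose wake function it owns, and treats
-EIOCBQUEUED from lock_page_async() as "the callback will run once the
page is unlocked":

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/wait.h>

/* hypothetical wake callback, invoked once the page is unlocked */
static int async_read_waiter(struct wait_queue_entry *wait, unsigned mode,
			     int sync, void *key);

static int start_async_read(struct kiocb *iocb, struct page *page,
			    struct wait_page_queue *wpq)
{
	/*
	 * __wait_on_page_locked_async() fills in wpq->page and
	 * wpq->bit_nr itself; the caller only supplies the wait entry
	 * and its wake function.
	 */
	init_waitqueue_func_entry(&wpq->wait, async_read_waiter);
	wpq->wait.private = iocb;	/* whatever the callback needs */

	iocb->ki_flags |= IOCB_WAITQ;	/* iocb->ki_waitq is valid */
	iocb->ki_waitq = wpq;

	/*
	 * Returns 0 if the page was locked without blocking, or
	 * -EIOCBQUEUED if we are now queued on the page waitqueue and
	 * async_read_waiter() will be called on unlock.
	 */
	return lock_page_async(page, wpq);
}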
Signed-off-by: Jens Axboe
Acked-by: Johannes Weiner
---
 include/linux/fs.h      |  7 ++++++-
 include/linux/pagemap.h |  9 +++++++++
 mm/filemap.c            | 41 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index d3ebb49189df..ba1fff0e7bca 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -314,6 +314,8 @@ enum rw_hint {
 #define IOCB_SYNC		(1 << 5)
 #define IOCB_WRITE		(1 << 6)
 #define IOCB_NOWAIT		(1 << 7)
+/* iocb->ki_waitq is valid */
+#define IOCB_WAITQ		(1 << 8)
 
 struct kiocb {
 	struct file		*ki_filp;
@@ -327,7 +329,10 @@ struct kiocb {
 	int			ki_flags;
 	u16			ki_hint;
 	u16			ki_ioprio; /* See linux/ioprio.h */
-	unsigned int		ki_cookie; /* for ->iopoll */
+	union {
+		unsigned int		ki_cookie; /* for ->iopoll */
+		struct wait_page_queue	*ki_waitq; /* for async buffered IO */
+	};
 
 	randomized_struct_fields_end
 };
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 53d980f2208d..d3e63c9c61ae 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -495,6 +495,7 @@ static inline int wake_page_match(struct wait_page_queue *wait_page,
 
 extern void __lock_page(struct page *page);
 extern int __lock_page_killable(struct page *page);
+extern int __lock_page_async(struct page *page, struct wait_page_queue *wait);
 extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 				unsigned int flags);
 extern void unlock_page(struct page *page);
@@ -531,6 +532,14 @@ static inline int lock_page_killable(struct page *page)
 	return 0;
 }
 
+static inline int lock_page_async(struct page *page,
+				  struct wait_page_queue *wait)
+{
+	if (!trylock_page(page))
+		return __lock_page_async(page, wait);
+	return 0;
+}
+
 /*
  * lock_page_or_retry - Lock the page, unless this would block and the
  * caller indicated that it can handle a retry.
diff --git a/mm/filemap.c b/mm/filemap.c
index e891b5bee8fd..c746541b1d49 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1183,6 +1183,42 @@ int wait_on_page_bit_killable(struct page *page, int bit_nr)
 }
 EXPORT_SYMBOL(wait_on_page_bit_killable);
 
+static int __wait_on_page_locked_async(struct page *page,
+				       struct wait_page_queue *wait, bool set)
+{
+	struct wait_queue_head *q = page_waitqueue(page);
+	int ret = 0;
+
+	wait->page = page;
+	wait->bit_nr = PG_locked;
+
+	spin_lock_irq(&q->lock);
+	if (set)
+		ret = !trylock_page(page);
+	else
+		ret = PageLocked(page);
+	if (ret) {
+		__add_wait_queue_entry_tail(q, &wait->wait);
+		SetPageWaiters(page);
+		if (set)
+			ret = !trylock_page(page);
+		else
+			ret = PageLocked(page);
+		/*
+		 * If we were successful now, we know we're still on the
+		 * waitqueue as we're still under the lock. This means it's
+		 * safe to remove and return success, we know the callback
+		 * isn't going to trigger.
+		 */
+		if (!ret)
+			__remove_wait_queue(q, &wait->wait);
+		else
+			ret = -EIOCBQUEUED;
+	}
+	spin_unlock_irq(&q->lock);
+	return ret;
+}
+
 /**
  * put_and_wait_on_page_locked - Drop a reference and wait for it to be unlocked
  * @page: The page to wait for.
@@ -1345,6 +1381,11 @@ int __lock_page_killable(struct page *__page)
 }
 EXPORT_SYMBOL_GPL(__lock_page_killable);
 
+int __lock_page_async(struct page *page, struct wait_page_queue *wait)
+{
+	return __wait_on_page_locked_async(page, wait, true);
+}
+
 /*
  * Return values:
  * 1 - page is locked; mmap_sem is still held.
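
For completeness, here is a sketch of what the wake side of such a
caller could look like (again illustrative only, not part of this
patch). It assumes the wake_page_match() helper made public earlier in
this series returns 1 only when the wakeup is for this page and bit,
and a non-1 value otherwise, mirroring the filtering that
wake_page_function() does internally:

/* hypothetical wake callback; runs from the page waitqueue wakeup path */
static int async_read_waiter(struct wait_queue_entry *wait, unsigned mode,
			     int sync, void *key)
{
	struct wait_page_queue *wpq = container_of(wait,
						struct wait_page_queue, wait);
	struct kiocb *iocb = wait->private;
	int ret;

	/* ignore wakeups for other pages/bits hashed to the same waitqueue */
	ret = wake_page_match(wpq, key);
	if (ret != 1)
		return ret;

	/* detach from the waitqueue so we are not woken again */
	list_del_init(&wait->entry);

	/*
	 * Kick off a retry of the buffered read for 'iocb' here, e.g. via
	 * task work or a workqueue; doing the retry directly from wakeup
	 * context would not be safe for arbitrary work.
	 */
	(void)iocb;
	return 1;
}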