From patchwork Wed Sep 25 00:52:08 2019
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11159857
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 09/15] mm: Allow large pages to be added to the page cache
Date: Tue, 24 Sep 2019 17:52:08 -0700
Message-Id: <20190925005214.27240-10-willy@infradead.org>
In-Reply-To: <20190925005214.27240-1-willy@infradead.org>
References: <20190925005214.27240-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

We return -EEXIST if there are any non-shadow entries in the page cache
in the range covered by the large page.  If there are multiple shadow
entries in the range, we set *shadowp to one of them (currently the one
at the highest index).  If that turns out to be the wrong answer, we can
implement something more complex.  This is mostly modelled after the
equivalent function in the shmem code.
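To make those semantics concrete, here is a distilled sketch of the
conflict scan (illustrative only; it simply mirrors the
xas_for_each_conflict() loop added in the diff below):

	xas_for_each_conflict(&xas, old) {
		if (!xa_is_value(old))	/* a real page is already present */
			break;		/* leaves 'old' non-NULL */
		exceptional++;		/* shadow entry: account for removal */
		if (shadowp)
			*shadowp = old;	/* ends up as highest-index shadow */
	}
	if (old)			/* scan aborted on a real page */
		xas_set_err(&xas, -EEXIST);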
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 37 ++++++++++++++++++++++++++-----------
 1 file changed, 26 insertions(+), 11 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index bab97addbb1d..afe8f5d95810 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -855,6 +855,7 @@ static int __add_to_page_cache_locked(struct page *page,
 	int huge = PageHuge(page);
 	struct mem_cgroup *memcg;
 	int error;
+	unsigned int nr = 1;
 	void *old;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
@@ -866,31 +867,45 @@ static int __add_to_page_cache_locked(struct page *page,
 					      gfp_mask, &memcg, false);
 		if (error)
 			return error;
+		xas_set_order(&xas, offset, compound_order(page));
+		nr = compound_nr(page);
 	}
 
-	get_page(page);
+	page_ref_add(page, nr);
 	page->mapping = mapping;
 	page->index = offset;
 
 	do {
+		unsigned long exceptional = 0;
+		unsigned int i = 0;
+
 		xas_lock_irq(&xas);
-		old = xas_load(&xas);
-		if (old && !xa_is_value(old))
+		xas_for_each_conflict(&xas, old) {
+			if (!xa_is_value(old))
+				break;
+			exceptional++;
+			if (shadowp)
+				*shadowp = old;
+		}
+		if (old)
 			xas_set_err(&xas, -EEXIST);
-		xas_store(&xas, page);
+		xas_create_range(&xas);
 		if (xas_error(&xas))
 			goto unlock;
 
-		if (xa_is_value(old)) {
-			mapping->nrexceptional--;
-			if (shadowp)
-				*shadowp = old;
+next:
+		xas_store(&xas, page);
+		if (++i < nr) {
+			xas_next(&xas);
+			goto next;
 		}
-		mapping->nrpages++;
+		mapping->nrexceptional -= exceptional;
+		mapping->nrpages += nr;
 
 		/* hugetlb pages do not participate in page cache accounting */
 		if (!huge)
-			__inc_node_page_state(page, NR_FILE_PAGES);
+			__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES,
+						nr);
 unlock:
 		xas_unlock_irq(&xas);
 	} while (xas_nomem(&xas, gfp_mask & GFP_RECLAIM_MASK));
@@ -907,7 +922,7 @@ static int __add_to_page_cache_locked(struct page *page,
 	/* Leave page->index set: truncation relies upon it */
 	if (!huge)
 		mem_cgroup_cancel_charge(page, memcg, false);
-	put_page(page);
+	page_ref_sub(page, nr);
 	return xas_error(&xas);
 }
 ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
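For illustration, a hypothetical caller might exercise this path as
follows.  This is a sketch only, not part of the patch: the use of
HPAGE_PMD_ORDER, prep_transhuge_page() and the add_to_page_cache_lru()
entry point are assumptions about how a filesystem could feed a THP
into the insertion path shown above.

	struct page *page;
	int error;

	/*
	 * Allocate an order-9 (on x86) compound page.  @index must be
	 * naturally aligned to the compound order, since the XArray
	 * stores multi-index entries over aligned ranges.
	 */
	page = alloc_pages(mapping_gfp_mask(mapping) | __GFP_COMP,
			   HPAGE_PMD_ORDER);
	if (!page)
		return -ENOMEM;
	prep_transhuge_page(page);

	/* With this patch, the page is stored across all of its slots. */
	error = add_to_page_cache_lru(page, mapping, index, gfp);
	if (error) {
		/* -EEXIST: some slot in the range already holds a page */
		put_page(page);
		return error;
	}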