From patchwork Mon Oct 19 18:59:10 2020
X-Patchwork-Id: 11844945
From: Kent Overstreet
To: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    akpm@linux-foundation.org, sfrench@samba.org, linux-cifs@vger.kernel.org
Cc: Kent Overstreet
Subject: [PATCH 1/2] cifs: convert to add_to_page_cache()
Date: Mon, 19 Oct 2020 14:59:10 -0400
Message-Id: <20201019185911.2909471-1-kent.overstreet@gmail.com>

This is just open coding add_to_page_cache(), and the next patch will delete
add_to_page_cache_locked().

Signed-off-by: Kent Overstreet
---
 fs/cifs/file.c | 21 +++++----------------
 1 file changed, 5 insertions(+), 16 deletions(-)

diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index be46fab4c9..a17a21181e 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -4296,20 +4296,12 @@ readpages_get_pages(struct address_space *mapping, struct list_head *page_list,
 
 	page = lru_to_page(page_list);
 
-	/*
-	 * Lock the page and put it in the cache. Since no one else
-	 * should have access to this page, we're safe to simply set
-	 * PG_locked without checking it first.
-	 */
-	__SetPageLocked(page);
-	rc = add_to_page_cache_locked(page, mapping,
-				      page->index, gfp);
+	rc = add_to_page_cache(page, mapping,
+			       page->index, gfp);
 
 	/* give up if we can't stick it in the cache */
-	if (rc) {
-		__ClearPageLocked(page);
+	if (rc)
 		return rc;
-	}
 
 	/* move first page to the tmplist */
 	*offset = (loff_t)page->index << PAGE_SHIFT;
@@ -4328,12 +4320,9 @@ readpages_get_pages(struct address_space *mapping, struct list_head *page_list,
 		if (*bytes + PAGE_SIZE > rsize)
 			break;
 
-		__SetPageLocked(page);
-		rc = add_to_page_cache_locked(page, mapping, page->index, gfp);
-		if (rc) {
-			__ClearPageLocked(page);
+		rc = add_to_page_cache(page, mapping, page->index, gfp);
+		if (rc)
 			break;
-		}
 		list_move_tail(&page->lru, tmplist);
 		(*bytes) += PAGE_SIZE;
 		expected_index++;
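
The conversion is behaviour-preserving: before this series, add_to_page_cache()
is an inline wrapper that does the same __SetPageLocked()/__ClearPageLocked()
dance cifs open codes here, and after patch 2/2 the locking moves inside the
page cache code itself. A minimal caller-side sketch of the pattern (the helper
names below are invented for illustration and are not part of the patch):

/*
 * Illustrative sketch only; helper names are invented. "Old" is the
 * pre-series pattern that cifs open coded, "new" is what this patch
 * converts it to.
 */
#include <linux/mm.h>
#include <linux/pagemap.h>

/* Old: the caller locks the newly allocated page and unlocks it on failure. */
static int cache_add_old(struct page *page, struct address_space *mapping,
			 pgoff_t index, gfp_t gfp)
{
	int rc;

	__SetPageLocked(page);
	rc = add_to_page_cache_locked(page, mapping, index, gfp);
	if (rc)
		__ClearPageLocked(page);
	return rc;
}

/* New: add_to_page_cache() owns the PG_locked handling. */
static int cache_add_new(struct page *page, struct address_space *mapping,
			 pgoff_t index, gfp_t gfp)
{
	return add_to_page_cache(page, mapping, index, gfp);
}

Either way the page ends up locked in the page cache on success; the point of
the series is that the lock/unlock-on-failure logic survives in exactly one
place (see patch 2/2).
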
From patchwork Mon Oct 19 18:59:11 2020
X-Patchwork-Id: 11844943
From: Kent Overstreet
To: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    akpm@linux-foundation.org, sfrench@samba.org, linux-cifs@vger.kernel.org
Cc: Kent Overstreet
Subject: [PATCH 2/2] fs: kill add_to_page_cache_locked()
Date: Mon, 19 Oct 2020 14:59:11 -0400
Message-Id: <20201019185911.2909471-2-kent.overstreet@gmail.com>
In-Reply-To: <20201019185911.2909471-1-kent.overstreet@gmail.com>
References: <20201019185911.2909471-1-kent.overstreet@gmail.com>

No longer has any users, so remove it.

Signed-off-by: Kent Overstreet
Reported-by: kernel test robot
---
 include/linux/pagemap.h | 20 ++-----------
 mm/filemap.c            | 62 ++++++++++++++++++++---------------
 2 files changed, 32 insertions(+), 50 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 434c9c34ae..aceaebfaab 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -689,8 +689,8 @@ static inline int fault_in_pages_readable(const char __user *uaddr, int size)
 	return 0;
 }
 
-int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
-				pgoff_t index, gfp_t gfp_mask);
+int add_to_page_cache(struct page *page, struct address_space *mapping,
+				pgoff_t index, gfp_t gfp_mask);
 int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 				pgoff_t index, gfp_t gfp_mask);
 extern void delete_from_page_cache(struct page *page);
@@ -710,22 +710,6 @@ void page_cache_readahead_unbounded(struct address_space *, struct file *,
 		pgoff_t index, unsigned long nr_to_read,
 		unsigned long lookahead_count);
 
-/*
- * Like add_to_page_cache_locked, but used to add newly allocated pages:
- * the page is new, so we can just run __SetPageLocked() against it.
- */
-static inline int add_to_page_cache(struct page *page,
-		struct address_space *mapping, pgoff_t offset, gfp_t gfp_mask)
-{
-	int error;
-
-	__SetPageLocked(page);
-	error = add_to_page_cache_locked(page, mapping, offset, gfp_mask);
-	if (unlikely(error))
-		__ClearPageLocked(page);
-	return error;
-}
-
 /**
  * struct readahead_control - Describes a readahead request.
  *
diff --git a/mm/filemap.c b/mm/filemap.c
index 82e5e0ba24..c562ad7e05 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -827,10 +827,10 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
 }
 EXPORT_SYMBOL_GPL(replace_page_cache_page);
 
-static int __add_to_page_cache_locked(struct page *page,
-				      struct address_space *mapping,
-				      pgoff_t offset, gfp_t gfp_mask,
-				      void **shadowp)
+static int __add_to_page_cache(struct page *page,
+			       struct address_space *mapping,
+			       pgoff_t offset, gfp_t gfp_mask,
+			       void **shadowp)
 {
 	XA_STATE(xas, &mapping->i_pages, offset);
 	int huge = PageHuge(page);
@@ -841,6 +841,7 @@ static int __add_to_page_cache_locked(struct page *page,
 	VM_BUG_ON_PAGE(PageSwapBacked(page), page);
 	mapping_set_update(&xas, mapping);
 
+	__SetPageLocked(page);
 	get_page(page);
 	page->mapping = mapping;
 	page->index = offset;
@@ -885,29 +886,30 @@ static int __add_to_page_cache_locked(struct page *page,
 	page->mapping = NULL;
 	/* Leave page->index set: truncation relies upon it */
 	put_page(page);
+	__ClearPageLocked(page);
 	return error;
 }
-ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
 
 /**
- * add_to_page_cache_locked - add a locked page to the pagecache
+ * add_to_page_cache - add a newly allocated page to the pagecache
  * @page: page to add
  * @mapping: the page's address_space
  * @offset: page index
  * @gfp_mask: page allocation mode
  *
- * This function is used to add a page to the pagecache. It must be locked.
- * This function does not add the page to the LRU. The caller must do that.
+ * This function is used to add a page to the pagecache. It must be newly
+ * allocated. This function does not add the page to the LRU. The caller must
+ * do that.
  *
  * Return: %0 on success, negative error code otherwise.
  */
-int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
-		pgoff_t offset, gfp_t gfp_mask)
+int add_to_page_cache(struct page *page, struct address_space *mapping,
+		pgoff_t offset, gfp_t gfp_mask)
 {
-	return __add_to_page_cache_locked(page, mapping, offset,
-					  gfp_mask, NULL);
+	return __add_to_page_cache(page, mapping, offset, gfp_mask, NULL);
 }
-EXPORT_SYMBOL(add_to_page_cache_locked);
+EXPORT_SYMBOL(add_to_page_cache);
+ALLOW_ERROR_INJECTION(add_to_page_cache, ERRNO);
 
 int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 				pgoff_t offset, gfp_t gfp_mask)
@@ -915,26 +917,22 @@ int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 	void *shadow = NULL;
 	int ret;
 
-	__SetPageLocked(page);
-	ret = __add_to_page_cache_locked(page, mapping, offset,
-					 gfp_mask, &shadow);
+	ret = __add_to_page_cache(page, mapping, offset, gfp_mask, &shadow);
 	if (unlikely(ret))
-		__ClearPageLocked(page);
-	else {
-		/*
-		 * The page might have been evicted from cache only
-		 * recently, in which case it should be activated like
-		 * any other repeatedly accessed page.
-		 * The exception is pages getting rewritten; evicting other
-		 * data from the working set, only to cache data that will
-		 * get overwritten with something else, is a waste of memory.
-		 */
-		WARN_ON_ONCE(PageActive(page));
-		if (!(gfp_mask & __GFP_WRITE) && shadow)
-			workingset_refault(page, shadow);
-		lru_cache_add(page);
-	}
-	return ret;
+		return ret;
+
+	/*
+	 * The page might have been evicted from cache only recently, in which
+	 * case it should be activated like any other repeatedly accessed page.
+	 * The exception is pages getting rewritten; evicting other data from
+	 * the working set, only to cache data that will get overwritten with
+	 * something else, is a waste of memory.
+	 */
+	WARN_ON_ONCE(PageActive(page));
+	if (!(gfp_mask & __GFP_WRITE) && shadow)
+		workingset_refault(page, shadow);
+	lru_cache_add(page);
+	return 0;
 }
 EXPORT_SYMBOL_GPL(add_to_page_cache_lru);
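
With both patches applied, add_to_page_cache() takes over what the deleted
static inline used to do: it expects a newly allocated page, sets PG_locked
before inserting, clears it again if the insert fails, and still leaves LRU
handling to the caller. A minimal hypothetical caller, sketched under those
assumptions (the helper name is invented and not part of this series):

/*
 * Hypothetical caller, sketched against the post-series API; the function
 * name is invented for illustration only.
 */
#include <linux/err.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/swap.h>

/* Allocate a fresh page and insert it into @mapping at @index. */
static struct page *example_grab_new_page(struct address_space *mapping,
					  pgoff_t index)
{
	struct page *page = page_cache_alloc(mapping);
	int err;

	if (!page)
		return ERR_PTR(-ENOMEM);

	/*
	 * add_to_page_cache() sets PG_locked itself and clears it again if
	 * the insert fails; the caller keeps its page reference either way.
	 */
	err = add_to_page_cache(page, mapping, index,
				mapping_gfp_mask(mapping));
	if (err) {
		put_page(page);
		return ERR_PTR(err);
	}

	/* Unlike add_to_page_cache_lru(), the LRU is the caller's job. */
	lru_cache_add(page);

	/* Returned locked and in the page cache; caller fills and unlocks it. */
	return page;
}
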