From patchwork Thu Dec 5 16:22:30 2019
X-Patchwork-Submitter: Alexander Duyck
X-Patchwork-Id: 11275161
Subject: [PATCH v15 3/7] mm: Add function __putback_isolated_page
From: Alexander Duyck
To: kvm@vger.kernel.org, mst@redhat.com, linux-kernel@vger.kernel.org,
 willy@infradead.org, mhocko@kernel.org, linux-mm@kvack.org,
 akpm@linux-foundation.org, mgorman@techsingularity.net, vbabka@suse.cz
Cc: yang.zhang.wz@gmail.com, nitesh@redhat.com, konrad.wilk@oracle.com,
 david@redhat.com, pagupta@redhat.com, riel@surriel.com,
 lcapitulino@redhat.com, dave.hansen@intel.com, wei.w.wang@intel.com,
 aarcange@redhat.com, pbonzini@redhat.com, dan.j.williams@intel.com,
 alexander.h.duyck@linux.intel.com, osalvador@suse.de
Date: Thu, 05 Dec 2019 08:22:30 -0800
Message-ID: <20191205162230.19548.70198.stgit@localhost.localdomain>
In-Reply-To: <20191205161928.19548.41654.stgit@localhost.localdomain>
References: <20191205161928.19548.41654.stgit@localhost.localdomain>
User-Agent: StGit/0.17.1-dirty

From: Alexander Duyck

There are cases where we would benefit from avoiding having to go through
the allocation and free cycle to return an isolated page. One example is
page poisoning, where we isolate a page and then put it back on the free
list without ever having actually allocated it. This will also let the
future free page reporting return pages that have already been reported
without retriggering the reporting notifiers.
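To make the saving concrete, here is a minimal sketch of the new return path
contrasted with the old fake allocate-and-free round trip;
example_return_isolated_page() is a hypothetical caller used purely for
illustration and is not part of this patch:

/* Illustrative sketch only -- not part of this patch. */
#include <linux/mm.h>
#include <linux/spinlock.h>
#include "internal.h"	/* declares __putback_isolated_page() */

static void example_return_isolated_page(struct page *page, unsigned int order)
{
	struct zone *zone = page_zone(page);
	unsigned long flags;

	/*
	 * Old path: pretend the page was allocated and then free it again:
	 *
	 *	post_alloc_hook(page, order, __GFP_MOVABLE);
	 *	__free_pages(page, order);
	 *
	 * New path: hand the page straight back to the free list it was
	 * isolated from.  __putback_isolated_page() expects the zone lock
	 * to be held by the caller.
	 */
	spin_lock_irqsave(&zone->lock, flags);
	__putback_isolated_page(page, order);
	spin_unlock_irqrestore(&zone->lock, flags);
}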
Signed-off-by: Alexander Duyck
Acked-by: David Hildenbrand
---
 mm/internal.h       |    1 +
 mm/page_alloc.c     |   24 ++++++++++++++++++++++++
 mm/page_isolation.c |    6 ++----
 3 files changed, 27 insertions(+), 4 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 3cf20ab3ca01..e1c908d0bf83 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -157,6 +157,7 @@ static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
 }
 
 extern int __isolate_free_page(struct page *page, unsigned int order);
+extern void __putback_isolated_page(struct page *page, unsigned int order);
 extern void memblock_free_pages(struct page *page, unsigned long pfn,
 					unsigned int order);
 extern void __free_pages_core(struct page *page, unsigned int order);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e0a7895300fb..500b242c6f7f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3228,6 +3228,30 @@ int __isolate_free_page(struct page *page, unsigned int order)
 	return 1UL << order;
 }
 
+/**
+ * __putback_isolated_page - Return a now-isolated page back where we got it
+ * @page: Page that was isolated
+ * @order: Order of the isolated page
+ *
+ * This function is meant to return a page pulled from the free lists via
+ * __isolate_free_page back to the free lists they were pulled from.
+ */
+void __putback_isolated_page(struct page *page, unsigned int order)
+{
+	struct zone *zone = page_zone(page);
+	unsigned long pfn;
+	unsigned int mt;
+
+	/* zone lock should be held when this function is called */
+	lockdep_assert_held(&zone->lock);
+
+	pfn = page_to_pfn(page);
+	mt = get_pfnblock_migratetype(page, pfn);
+
+	/* Return isolated page to tail of freelist. */
+	__free_one_page(page, pfn, zone, order, mt);
+}
+
 /*
  * Update NUMA hit/miss statistics
  *
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 04ee1663cdbe..d93d2be0070f 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -134,13 +134,11 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
 		__mod_zone_freepage_state(zone, nr_pages, migratetype);
 	}
 	set_pageblock_migratetype(page, migratetype);
+	if (isolated_page)
+		__putback_isolated_page(page, order);
 	zone->nr_isolate_pageblock--;
 out:
 	spin_unlock_irqrestore(&zone->lock, flags);
-	if (isolated_page) {
-		post_alloc_hook(page, order, __GFP_MOVABLE);
-		__free_pages(page, order);
-	}
 }
 
 static inline struct page *
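For the free page reporting use case mentioned above, the new helper pairs
with __isolate_free_page() roughly as in the following sketch;
example_report_one_page() and example_process_page() are hypothetical names
used only to show the pattern, not functions added by this series:

/* Illustrative sketch only -- not part of this patch. */
static void example_process_page(struct page *page, unsigned int order);

static void example_report_one_page(struct zone *zone, struct page *page,
				    unsigned int order)
{
	unsigned long flags;

	spin_lock_irqsave(&zone->lock, flags);

	/* Pull the free page off the free lists; returns 0 on failure. */
	if (!__isolate_free_page(page, order)) {
		spin_unlock_irqrestore(&zone->lock, flags);
		return;
	}

	/* Drop the lock while acting on the page (e.g. reporting it). */
	spin_unlock_irqrestore(&zone->lock, flags);
	example_process_page(page, order);

	/* Return the page to the free list it was isolated from. */
	spin_lock_irqsave(&zone->lock, flags);
	__putback_isolated_page(page, order);
	spin_unlock_irqrestore(&zone->lock, flags);
}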