From patchwork Mon May 11 02:12:04 2015
X-Patchwork-Submitter: Zhihui Zhang
X-Patchwork-Id: 6372501
From: Zhihui Zhang <zzhsuny@gmail.com>
To: akpm@linux-foundation.org, hannes@cmpxchg.org, mhocko@suse.cz
Cc: kvm@vger.kernel.org
Subject: [PATCH] Rename RECLAIM_SWAP to RECLAIM_UNMAP.
Date: Sun, 10 May 2015 22:12:04 -0400
Message-Id: <1431310324-8351-1-git-send-email-zzhsuny@gmail.com>

The name SWAP implies that we are dealing with anonymous pages only. In
fact, the original patch that introduced the min_unmapped_ratio logic was
to fix an issue related to file pages. Rename the flag to RECLAIM_UNMAP
to match what it actually does.

Historically, a commit renamed .may_swap to .may_unmap, leaving
RECLAIM_SWAP behind; commit <2e2e42598908> later reintroduced .may_swap
for the memory controller.

Signed-off-by: Zhihui Zhang <zzhsuny@gmail.com>
---
 mm/vmscan.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5e8eadd..15328de 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3596,7 +3596,7 @@ int zone_reclaim_mode __read_mostly;
 #define RECLAIM_OFF 0
 #define RECLAIM_ZONE (1<<0)	/* Run shrink_inactive_list on the zone */
 #define RECLAIM_WRITE (1<<1)	/* Writeout pages during reclaim */
-#define RECLAIM_SWAP (1<<2)	/* Swap pages out during reclaim */
+#define RECLAIM_UNMAP (1<<2)	/* Unmap pages during reclaim */
 
 /*
  * Priority for ZONE_RECLAIM. This determines the fraction of pages
@@ -3638,12 +3638,12 @@ static long zone_pagecache_reclaimable(struct zone *zone)
 	long delta = 0;
 
 	/*
-	 * If RECLAIM_SWAP is set, then all file pages are considered
+	 * If RECLAIM_UNMAP is set, then all file pages are considered
 	 * potentially reclaimable. Otherwise, we have to worry about
 	 * pages like swapcache and zone_unmapped_file_pages() provides
 	 * a better estimate
 	 */
-	if (zone_reclaim_mode & RECLAIM_SWAP)
+	if (zone_reclaim_mode & RECLAIM_UNMAP)
 		nr_pagecache_reclaimable = zone_page_state(zone, NR_FILE_PAGES);
 	else
 		nr_pagecache_reclaimable = zone_unmapped_file_pages(zone);
@@ -3674,15 +3674,15 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
 		.order = order,
 		.priority = ZONE_RECLAIM_PRIORITY,
 		.may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
-		.may_unmap = !!(zone_reclaim_mode & RECLAIM_SWAP),
+		.may_unmap = !!(zone_reclaim_mode & RECLAIM_UNMAP),
 		.may_swap = 1,
 	};
 
 	cond_resched();
 	/*
-	 * We need to be able to allocate from the reserves for RECLAIM_SWAP
+	 * We need to be able to allocate from the reserves for RECLAIM_UNMAP
 	 * and we also need to be able to write out pages for RECLAIM_WRITE
-	 * and RECLAIM_SWAP.
+	 * and RECLAIM_UNMAP.
 	 */
 	p->flags |= PF_MEMALLOC | PF_SWAPWRITE;
 	lockdep_set_current_reclaim_state(gfp_mask);