From patchwork Thu Sep 13 19:54:50 2012
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Rik van Riel
X-Patchwork-Id: 1454071
Return-Path:
X-Original-To: patchwork-kvm@patchwork.kernel.org
Delivered-To: patchwork-process-083081@patchwork2.kernel.org
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
	by patchwork2.kernel.org (Postfix) with ESMTP id 863E8DF24C
	for ; Thu, 13 Sep 2012 19:54:58 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752419Ab2IMTyy (ORCPT );
	Thu, 13 Sep 2012 15:54:54 -0400
Received: from mx1.redhat.com ([209.132.183.28]:64940 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751715Ab2IMTyw (ORCPT );
	Thu, 13 Sep 2012 15:54:52 -0400
Received: from int-mx02.intmail.prod.int.phx2.redhat.com
	(int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id q8DJsiDK012965
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 13 Sep 2012 15:54:44 -0400
Received: from cuia.bos.redhat.com (cuia.bos.redhat.com [10.16.184.35])
	by int-mx02.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id q8DJsgZD018138; Thu, 13 Sep 2012 15:54:43 -0400
Date: Thu, 13 Sep 2012 15:54:50 -0400
From: Rik van Riel
To: Richard Davies
Cc: Mel Gorman, Avi Kivity, Shaohua Li, qemu-devel@nongnu.org,
	kvm@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH -v2 2/2] make the compaction "skip ahead" logic robust
Message-ID: <20120913155450.7634148f@cuia.bos.redhat.com>
In-Reply-To: <20120913154824.44cc0e28@cuia.bos.redhat.com>
References: <20120822124032.GA12647@alpha.arachsys.com>
	<5034D437.8070106@redhat.com>
	<20120822144150.GA1400@alpha.arachsys.com>
	<5034F8F4.3080301@redhat.com>
	<20120825174550.GA8619@alpha.arachsys.com>
	<50391564.30401@redhat.com>
	<20120826105803.GA377@alpha.arachsys.com>
	<20120906092039.GA19234@alpha.arachsys.com>
	<20120912105659.GA23818@alpha.arachsys.com>
	<20120912122541.GO11266@suse.de>
	<20120912164615.GA14173@alpha.arachsys.com>
	<20120913154824.44cc0e28@cuia.bos.redhat.com>
Organization: Red Hat, Inc
Mime-Version: 1.0
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.12
Sender: kvm-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: kvm@vger.kernel.org

Argh. And of course I sent out the version from _before_ the compile
test, instead of the one after! I am not used to caffeine any more and
have had way too much tea...

---8<---

Make the "skip ahead" logic in compaction resistant to compaction
wrapping around to the end of the zone. This can lead to less efficient
compaction when one thread has wrapped around to the end of the zone,
and another simultaneous compactor has not done so yet. However, it
should ensure that we do not suffer quadratic behaviour any more.

Signed-off-by: Rik van Riel
Reported-by: Richard Davies

---
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

diff --git a/mm/compaction.c b/mm/compaction.c
index 771775d..0656759 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -431,6 +431,24 @@ static bool suitable_migration_target(struct page *page)
 }
 
 /*
+ * We scan the zone in a circular fashion, starting at
+ * zone->compact_cached_free_pfn. Be careful not to skip if
+ * one compacting thread has just wrapped back to the end of the
+ * zone, but another thread has not.
+ */
+static bool compaction_may_skip(struct zone *zone,
+				struct compact_control *cc)
+{
+	if (!cc->wrapped && zone->compact_cached_free_pfn < cc->start_free_pfn)
+		return true;
+
+	if (cc->wrapped && zone->compact_cached_free_pfn > cc->start_free_pfn)
+		return true;
+
+	return false;
+}
+
+/*
  * Based on information in the current compact_control, find blocks
  * suitable for isolating free pages from and then isolate them.
  */
@@ -471,13 +489,9 @@ static void isolate_freepages(struct zone *zone,
 
 		/*
 		 * Skip ahead if another thread is compacting in the area
-		 * simultaneously. If we wrapped around, we can only skip
-		 * ahead if zone->compact_cached_free_pfn also wrapped to
-		 * above our starting point.
+		 * simultaneously, and has finished with this page block.
 		 */
-		if (cc->order > 0 && (!cc->wrapped ||
-				      zone->compact_cached_free_pfn >
-				      cc->start_free_pfn))
+		if (cc->order > 0 && compaction_may_skip(zone, cc))
 			pfn = min(pfn, zone->compact_cached_free_pfn);
 
 		if (!pfn_valid(pfn))
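
For anyone who wants to poke at the decision table outside the kernel,
here is a throwaway user-space sketch. It is not part of the patch: the
struct definitions are trimmed-down stand-ins that keep only the two
fields the test reads, and the main() harness is purely illustrative;
only the body of compaction_may_skip() matches what the patch adds.

/*
 * Throwaway user-space illustration -- NOT part of the patch.  The
 * structs below are stand-ins that keep only the fields the skip test
 * reads; everything except the body of compaction_may_skip() is
 * hypothetical.
 */
#include <stdbool.h>
#include <stdio.h>

struct zone {
	unsigned long compact_cached_free_pfn;	/* shared cached free-scanner position */
};

struct compact_control {
	unsigned long start_free_pfn;	/* where this scanner's free scan began */
	bool wrapped;			/* has this free scan wrapped back to the zone end? */
};

/* Same test the patch adds to mm/compaction.c. */
static bool compaction_may_skip(struct zone *zone, struct compact_control *cc)
{
	if (!cc->wrapped && zone->compact_cached_free_pfn < cc->start_free_pfn)
		return true;

	if (cc->wrapped && zone->compact_cached_free_pfn > cc->start_free_pfn)
		return true;

	return false;
}

int main(void)
{
	struct zone z = { .compact_cached_free_pfn = 900 };

	/* Not wrapped: a cached pfn (900) above our start (500) does not
	 * mean another scanner is ahead of us, so we must not skip. */
	struct compact_control a = { .start_free_pfn = 500, .wrapped = false };

	/* Wrapped back to the zone end: a cached pfn above our original
	 * start means the cached position wrapped too, so skipping is safe. */
	struct compact_control b = { .start_free_pfn = 500, .wrapped = true };

	printf("not wrapped, may skip: %d\n", compaction_may_skip(&z, &a));	/* 0 */
	printf("wrapped,     may skip: %d\n", compaction_may_skip(&z, &b));	/* 1 */
	return 0;
}

Pulling the test into a helper also makes the asymmetry between the
wrapped and non-wrapped cases explicit, instead of burying it in a
multi-line condition inside isolate_freepages().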