From patchwork Tue Feb 11 21:31:26 2020
X-Patchwork-Submitter: Mina Almasry <almasrymina@google.com>
X-Patchwork-Id: 11376937
Date: Tue, 11 Feb 2020 13:31:26 -0800
In-Reply-To: <20200211213128.73302-1-almasrymina@google.com>
Message-Id: <20200211213128.73302-7-almasrymina@google.com>
References: <20200211213128.73302-1-almasrymina@google.com>
Subject: [PATCH v12 7/9] hugetlb: support file_region coalescing again
From: Mina Almasry <almasrymina@google.com>
To: mike.kravetz@oracle.com
Cc: shuah@kernel.org, almasrymina@google.com, rientjes@google.com,
    shakeelb@google.com, gthelen@google.com, akpm@linux-foundation.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-kselftest@vger.kernel.org, cgroups@vger.kernel.org

An earlier patch in this series disabled file_region coalescing in order
to hang the hugetlb_cgroup uncharge info on the file_region entries.

This patch re-adds support for coalescing of file_region entries.
Essentially, every time we add an entry, we call a recursive function
that tries to coalesce the added region with the regions next to it. The
worst-case call depth for this function is 3: one call to coalesce with
the next region, one to coalesce with the previous region, and one to
reach the base case. (A small standalone sketch of this merging follows
the changelog below, ahead of the diff.)

This is an important performance optimization, as private mappings add
their entries page by page, and large mappings with lots of file_region
entries in their resv_map could otherwise incur a big performance cost.

Signed-off-by: Mina Almasry <almasrymina@google.com>
Acked-by: David Rientjes <rientjes@google.com>

---

Changes in v12:
- Changed the coalescing logic. Instead of checking inline whether we can
  coalesce with only the next or previous region, we now have a recursive
  function that takes care of coalescing in both directions.
- For testing this code I added debug code that checks that the entries
  in the resv_map are coalesced appropriately. This passes with the
  libhugetlbfs tests.
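Note for readers, not part of the patch: below is a minimal, self-contained
userspace sketch of the merging scheme described above. The struct region,
add_region() and coalesce_region() names, and the integer "cg" field standing
in for the (reservation_counter, css) uncharge info, are invented for this
illustration; the real kernel implementation is coalesce_file_region() in the
diff that follows.

/*
 * Standalone illustration only -- NOT the kernel code. Models a resv_map as
 * a doubly linked list of [from, to) regions, each tagged with an integer
 * "cg" that stands in for the hugetlb_cgroup uncharge info.
 */
#include <stdio.h>
#include <stdlib.h>

struct region {
	long from, to;
	int cg;				/* stand-in for the uncharge info */
	struct region *prev, *next;
};

/* List sentinel: head.next is the first region, head.prev is the last. */
static struct region head = { .prev = &head, .next = &head };

static void del_region(struct region *rg)
{
	rg->prev->next = rg->next;
	rg->next->prev = rg->prev;
	free(rg);
}

/*
 * Mirrors coalesce_file_region(): try to merge rg into its predecessor, then
 * into its successor. Each successful merge recurses on the surviving entry,
 * so the worst-case depth is 3 (prev merge, next merge, base case).
 */
static void coalesce_region(struct region *rg)
{
	struct region *prg = rg->prev, *nrg = rg->next;

	if (prg != &head && prg->to == rg->from && prg->cg == rg->cg) {
		prg->to = rg->to;
		del_region(rg);
		coalesce_region(prg);
		return;
	}

	if (nrg != &head && nrg->from == rg->to && nrg->cg == rg->cg) {
		nrg->from = rg->from;
		del_region(rg);
		coalesce_region(nrg);
		return;
	}
}

/* Append a new [from, to) region at the list tail, then coalesce it. */
static void add_region(long from, long to, int cg)
{
	struct region *rg = calloc(1, sizeof(*rg));

	rg->from = from;
	rg->to = to;
	rg->cg = cg;
	rg->prev = head.prev;
	rg->next = &head;
	head.prev->next = rg;
	head.prev = rg;
	coalesce_region(rg);
}

int main(void)
{
	struct region *rg;

	/* Private mappings reserve page by page; adjacent entries with the
	 * same uncharge info collapse into a single region. */
	add_region(0, 1, 1);
	add_region(1, 2, 1);
	add_region(2, 3, 1);
	add_region(3, 4, 2);	/* different "cgroup": stays separate */

	for (rg = head.next; rg != &head; rg = rg->next)
		printf("[%ld, %ld) cg=%d\n", rg->from, rg->to, rg->cg);
	return 0;
}

Compiled and run, this prints "[0, 3) cg=1" followed by "[3, 4) cg=2": the
three page-by-page reservations under the same "cgroup" collapse into one
entry, which is exactly the invariant check_coalesce_bug() asserts in the
kernel under CONFIG_DEBUG_VM.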
---
 mm/hugetlb.c | 85 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 85 insertions(+)

--
2.25.0.225.g125e21ebc7-goog

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2d62dd35399db..45219cb58ac71 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -276,6 +276,86 @@ static void record_hugetlb_cgroup_uncharge_info(struct hugetlb_cgroup *h_cg,
 #endif
 }
 
+static bool has_same_uncharge_info(struct file_region *rg,
+				   struct file_region *org)
+{
+#ifdef CONFIG_CGROUP_HUGETLB
+	return rg && org &&
+	       rg->reservation_counter == org->reservation_counter &&
+	       rg->css == org->css;
+
+#else
+	return true;
+#endif
+}
+
+#ifdef CONFIG_DEBUG_VM
+static void dump_resv_map(struct resv_map *resv)
+{
+	struct list_head *head = &resv->regions;
+	struct file_region *rg = NULL;
+
+	pr_err("--------- start print resv_map ---------\n");
+	list_for_each_entry(rg, head, link) {
+		pr_err("rg->from=%ld, rg->to=%ld, rg->reservation_counter=%px, rg->css=%px\n",
+		       rg->from, rg->to, rg->reservation_counter, rg->css);
+	}
+	pr_err("--------- end print resv_map ---------\n");
+}
+
+/* Debug function to loop over the resv_map and make sure that coalescing is
+ * working.
+ */
+static void check_coalesce_bug(struct resv_map *resv)
+{
+	struct list_head *head = &resv->regions;
+	struct file_region *rg = NULL, *nrg = NULL;
+
+	list_for_each_entry(rg, head, link) {
+		nrg = list_next_entry(rg, link);
+
+		if (&nrg->link == head)
+			break;
+
+		if (nrg->reservation_counter && nrg->from == rg->to &&
+		    nrg->reservation_counter == rg->reservation_counter &&
+		    nrg->css == rg->css) {
+			dump_resv_map(resv);
+			VM_BUG_ON(true);
+		}
+	}
+}
+#endif
+
+static void coalesce_file_region(struct resv_map *resv, struct file_region *rg)
+{
+	struct file_region *nrg = NULL, *prg = NULL;
+
+	prg = list_prev_entry(rg, link);
+	if (&prg->link != &resv->regions && prg->to == rg->from &&
+	    has_same_uncharge_info(prg, rg)) {
+		prg->to = rg->to;
+
+		list_del(&rg->link);
+		kfree(rg);
+
+		coalesce_file_region(resv, prg);
+		return;
+	}
+
+	nrg = list_next_entry(rg, link);
+	if (&nrg->link != &resv->regions && nrg->from == rg->to &&
+	    has_same_uncharge_info(nrg, rg)) {
+		nrg->from = rg->from;
+
+		list_del(&rg->link);
+		kfree(rg);
+
+		coalesce_file_region(resv, nrg);
+		return;
+	}
+}
+
 /* Must be called with resv->lock held. Calling this with count_only == true
  * will count the number of pages to be added but will not modify the linked
  * list. If regions_needed != NULL and count_only == true, then regions_needed
@@ -327,6 +407,7 @@ static long add_reservation_in_range(struct resv_map *resv, long f, long t,
 			record_hugetlb_cgroup_uncharge_info(h_cg, h, resv,
 							    nrg);
 			list_add(&nrg->link, rg->link.prev);
+			coalesce_file_region(resv, nrg);
 		} else if (regions_needed)
 			*regions_needed += 1;
 	}
@@ -344,11 +425,15 @@ static long add_reservation_in_range(struct resv_map *resv, long f, long t,
 					resv, last_accounted_offset, t);
 		record_hugetlb_cgroup_uncharge_info(h_cg, h, resv, nrg);
 		list_add(&nrg->link, rg->link.prev);
+		coalesce_file_region(resv, nrg);
 	} else if (regions_needed)
 		*regions_needed += 1;
 	}
 
 	VM_BUG_ON(add < 0);
 
+#ifdef CONFIG_DEBUG_VM
+	check_coalesce_bug(resv);
+#endif
 	return add;
 }