From patchwork Wed Feb 19 01:27:37 2020
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 11390179
Date: Tue, 18 Feb 2020 17:27:37 -0800
In-Reply-To: <67aa82a8-3c8d-d1eb-7e83-4f722b1eeb2a@oracle.com>
Message-Id: <20200219012736.20363-1-almasrymina@google.com>
References: <67aa82a8-3c8d-d1eb-7e83-4f722b1eeb2a@oracle.com>
Subject: [PATCH v2] mm/hugetlb: Fix file_region entry allocations
From: Mina Almasry
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Qian Cai, mike.kravetz@oracle.com, Mina Almasry

Commit a9e443086489e ("hugetlb: disable region_add file_region
coalescing") introduced a bug in how file_region entries are allocated,
which is fixed here:

1. Refactor the file_region entry allocation logic into one function,
   called from both region_add and region_chg, since the code is now
   identical.

2. Have region_chg modify resv->adds_in_progress only after the regions
   have been allocated. Previously it incremented adds_in_progress and
   then dropped the lock, which could confuse racing region_add calls
   into thinking they needed to allocate entries when they are not
   allowed to.

3. In region_add, only try to allocate regions when
   actual_regions_needed > in_regions_needed. This does not fix a bug,
   but it makes the code cleaner and easier to reason about.

Tested with the ltp hugemmap0* tests and the libhugetlbfs tests.

Reported-by: Qian Cai
Signed-off-by: Mina Almasry
Fixes: a9e443086489e ("hugetlb: disable region_add file_region coalescing")
---
Changes in v2:
- Added __must_hold.
- Fixed formatting issues caused by clang-format adding a space after
  list_for_each_entry_safe.
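
To make point 2 above easier to follow in isolation, below is a minimal
userspace sketch of the allocate-then-publish ordering this patch adopts.
It is not kernel code: lock, cache, cache_count, adds_in_progress,
allocate_entries() and chg() are invented stand-ins for resv->lock,
resv->region_cache, resv->region_cache_count, resv->adds_in_progress,
allocate_file_region_entries() and region_chg(). The only point it
demonstrates is that the in-progress count is published only after the
cache has been refilled, even though refilling drops and re-takes the lock.

/*
 * Minimal userspace sketch (not kernel code) of the allocate-then-publish
 * ordering used by this patch. All names are invented stand-ins for the
 * resv_map fields and helpers named in the commit message.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct entry {
        struct entry *next;
};

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct entry *cache;     /* stand-in for resv->region_cache */
static int cache_count;         /* stand-in for resv->region_cache_count */
static int adds_in_progress;    /* stand-in for resv->adds_in_progress */

/* Called with `lock` held; drops and re-takes it around malloc(), the way
 * allocate_file_region_entries() drops resv->lock around kmalloc(). */
static int allocate_entries(int needed)
{
        while (cache_count < adds_in_progress + needed) {
                int to_allocate = adds_in_progress + needed - cache_count;
                struct entry *batch = NULL;

                pthread_mutex_unlock(&lock);
                for (int i = 0; i < to_allocate; i++) {
                        struct entry *e = malloc(sizeof(*e));

                        if (!e) {
                                /* Free the partial batch. Unlike the kernel
                                 * helper, re-take the lock so the caller can
                                 * always unlock. */
                                while (batch) {
                                        struct entry *n = batch->next;
                                        free(batch);
                                        batch = n;
                                }
                                pthread_mutex_lock(&lock);
                                return -1;
                        }
                        e->next = batch;
                        batch = e;
                }
                pthread_mutex_lock(&lock);

                /* Move the freshly allocated batch into the shared cache. */
                while (batch) {
                        struct entry *n = batch->next;

                        batch->next = cache;
                        cache = batch;
                        cache_count++;
                        batch = n;
                }
        }
        return 0;
}

/* Sketch of region_chg()'s new ordering: reserve entries first, publish
 * adds_in_progress last, so a racing caller never sees a promise that is
 * not yet backed by cache entries. */
static int chg(int regions_needed)
{
        int ret = 0;

        pthread_mutex_lock(&lock);
        if (allocate_entries(regions_needed))
                ret = -1;
        else
                adds_in_progress += regions_needed;     /* publish last */
        pthread_mutex_unlock(&lock);
        return ret;
}

int main(void)
{
        if (chg(3) == 0)
                printf("cache_count=%d adds_in_progress=%d\n",
                       cache_count, adds_in_progress);
        return 0;
}

With the old ordering (increment first, allocate later), a racing
region_add could observe adds_in_progress ahead of cache_count and wrongly
conclude it had to allocate entries itself.
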
---
 mm/hugetlb.c | 150 +++++++++++++++++++++++++--------------------------
 1 file changed, 75 insertions(+), 75 deletions(-)

--
2.25.0.265.gbab2e86ba0-goog

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8171d2211be77..94e27dfec0435 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -439,6 +439,67 @@ static long add_reservation_in_range(struct resv_map *resv, long f, long t,
         return add;
 }
 
+/* Must be called with resv->lock acquired. Will drop lock to allocate entries.
+ */
+static int allocate_file_region_entries(struct resv_map *resv,
+                                        int regions_needed)
+        __must_hold(&resv->lock)
+{
+        struct list_head allocated_regions;
+        int to_allocate = 0, i = 0;
+        struct file_region *trg = NULL, *rg = NULL;
+
+        VM_BUG_ON(regions_needed < 0);
+
+        INIT_LIST_HEAD(&allocated_regions);
+
+        /*
+         * Check for sufficient descriptors in the cache to accommodate
+         * the number of in progress add operations plus regions_needed.
+         *
+         * This is a while loop because when we drop the lock, some other call
+         * to region_add or region_del may have consumed some region_entries,
+         * so we keep looping here until we finally have enough entries for
+         * (adds_in_progress + regions_needed).
+         */
+        while (resv->region_cache_count <
+               (resv->adds_in_progress + regions_needed)) {
+                to_allocate = resv->adds_in_progress + regions_needed -
+                              resv->region_cache_count;
+
+                /* At this point, we should have enough entries in the cache
+                 * for all the existing adds_in_progress. We should only be
+                 * needing to allocate for regions_needed.
+                 */
+                VM_BUG_ON(resv->region_cache_count < resv->adds_in_progress);
+
+                spin_unlock(&resv->lock);
+                for (i = 0; i < to_allocate; i++) {
+                        trg = kmalloc(sizeof(*trg), GFP_KERNEL);
+                        if (!trg)
+                                goto out_of_memory;
+                        list_add(&trg->link, &allocated_regions);
+                }
+
+                spin_lock(&resv->lock);
+
+                list_for_each_entry_safe(rg, trg, &allocated_regions, link) {
+                        list_del(&rg->link);
+                        list_add(&rg->link, &resv->region_cache);
+                        resv->region_cache_count++;
+                }
+        }
+
+        return 0;
+
+out_of_memory:
+        list_for_each_entry_safe(rg, trg, &allocated_regions, link) {
+                list_del(&rg->link);
+                kfree(rg);
+        }
+        return -ENOMEM;
+}
+
 /*
  * Add the huge page range represented by [f, t) to the reserve
  * map. Regions will be taken from the cache to fill in this range.
@@ -460,11 +521,7 @@ static long region_add(struct resv_map *resv, long f, long t,
                        long in_regions_needed, struct hstate *h,
                        struct hugetlb_cgroup *h_cg)
 {
-        long add = 0, actual_regions_needed = 0, i = 0;
-        struct file_region *trg = NULL, *rg = NULL;
-        struct list_head allocated_regions;
-
-        INIT_LIST_HEAD(&allocated_regions);
+        long add = 0, actual_regions_needed = 0;
 
         spin_lock(&resv->lock);
 retry:
@@ -476,34 +533,24 @@ static long region_add(struct resv_map *resv, long f, long t,
         /*
          * Check for sufficient descriptors in the cache to accommodate
          * this add operation. Note that actual_regions_needed may be greater
-         * than in_regions_needed. In this case, we need to make sure that we
+         * than in_regions_needed, as the resv_map may have been modified since
+         * the region_chg call. In this case, we need to make sure that we
          * allocate extra entries, such that we have enough for all the
          * existing adds_in_progress, plus the excess needed for this
          * operation.
          */
-        if (resv->region_cache_count <
-            resv->adds_in_progress +
-                    (actual_regions_needed - in_regions_needed)) {
+        if (actual_regions_needed > in_regions_needed &&
+            resv->region_cache_count <
+                    resv->adds_in_progress +
+                            (actual_regions_needed - in_regions_needed)) {
                 /* region_add operation of range 1 should never need to
                  * allocate file_region entries.
                  */
                 VM_BUG_ON(t - f <= 1);
 
-                /* Must drop lock to allocate a new descriptor. */
-                spin_unlock(&resv->lock);
-                for (i = 0; i < (actual_regions_needed - in_regions_needed);
-                     i++) {
-                        trg = kmalloc(sizeof(*trg), GFP_KERNEL);
-                        if (!trg)
-                                goto out_of_memory;
-                        list_add(&trg->link, &allocated_regions);
-                }
-                spin_lock(&resv->lock);
-
-                list_for_each_entry_safe(rg, trg, &allocated_regions, link) {
-                        list_del(&rg->link);
-                        list_add(&rg->link, &resv->region_cache);
-                        resv->region_cache_count++;
+                if (allocate_file_region_entries(
+                            resv, actual_regions_needed - in_regions_needed)) {
+                        return -ENOMEM;
                 }
 
                 goto retry;
@@ -516,13 +563,6 @@ static long region_add(struct resv_map *resv, long f, long t,
         spin_unlock(&resv->lock);
         VM_BUG_ON(add < 0);
         return add;
-
-out_of_memory:
-        list_for_each_entry_safe(rg, trg, &allocated_regions, link) {
-                list_del(&rg->link);
-                kfree(rg);
-        }
-        return -ENOMEM;
 }
 
 /*
@@ -548,11 +588,7 @@ static long region_add(struct resv_map *resv, long f, long t,
 static long region_chg(struct resv_map *resv, long f, long t,
                        long *out_regions_needed)
 {
-        struct file_region *trg = NULL, *rg = NULL;
-        long chg = 0, i = 0, to_allocate = 0;
-        struct list_head allocated_regions;
-
-        INIT_LIST_HEAD(&allocated_regions);
+        long chg = 0;
 
         spin_lock(&resv->lock);
 
@@ -563,49 +599,13 @@ static long region_chg(struct resv_map *resv, long f, long t,
         if (*out_regions_needed == 0)
                 *out_regions_needed = 1;
 
-        resv->adds_in_progress += *out_regions_needed;
-
-        /*
-         * Check for sufficient descriptors in the cache to accommodate
-         * the number of in progress add operations.
-         */
-        while (resv->region_cache_count < resv->adds_in_progress) {
-                to_allocate = resv->adds_in_progress - resv->region_cache_count;
-
-                /* Must drop lock to allocate a new descriptor. Note that even
-                 * though we drop the lock here, we do not make another call to
-                 * add_reservation_in_range after re-acquiring the lock.
-                 * Essentially this branch makes sure that we have enough
-                 * descriptors in the cache as suggested by the first call to
-                 * add_reservation_in_range. If more regions turn out to be
-                 * required, region_add will deal with it.
-                 */
-                spin_unlock(&resv->lock);
-                for (i = 0; i < to_allocate; i++) {
-                        trg = kmalloc(sizeof(*trg), GFP_KERNEL);
-                        if (!trg)
-                                goto out_of_memory;
-                        list_add(&trg->link, &allocated_regions);
-                }
-
-                spin_lock(&resv->lock);
+        if (allocate_file_region_entries(resv, *out_regions_needed))
+                return -ENOMEM;
 
-                list_for_each_entry_safe(rg, trg, &allocated_regions, link) {
-                        list_del(&rg->link);
-                        list_add(&rg->link, &resv->region_cache);
-                        resv->region_cache_count++;
-                }
-        }
+        resv->adds_in_progress += *out_regions_needed;
 
         spin_unlock(&resv->lock);
         return chg;
-
-out_of_memory:
-        list_for_each_entry_safe(rg, trg, &allocated_regions, link) {
-                list_del(&rg->link);
-                kfree(rg);
-        }
-        return -ENOMEM;
 }
 
 /*