From patchwork Thu Jul 30 19:50:30 2020
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: peterx@redhat.com, Andrew Morton, Andrea Arcangeli, Mike Kravetz
Subject: [PATCH] mm/hugetlb: Fix calculation of adjust_range_if_pmd_sharing_possible
Date: Thu, 30 Jul 2020 15:50:30 -0400
Message-Id: <20200730195030.60616-1-peterx@redhat.com>
X-Mailer: git-send-email 2.26.2

This is found by code observation only.

Firstly, the worst case scenario should assume the whole range was covered
by pmd sharing.
The old algorithm does not work as expected for a range like (1g-2m,
1g+2m): the adjusted range comes out as (0, 1g+2m), while the expected
worst-case range is (0, 2g).

While at it, remove the loop since it should not be required.  With that,
the new code should also be faster when the invalidated range is large.

CC: Andrea Arcangeli
CC: Mike Kravetz
CC: Andrew Morton
CC: linux-mm@kvack.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/hugetlb.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4645f1441d32..0e5a0512c13c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -43,6 +43,9 @@
 #include
 #include "internal.h"
 
+#define MAX(a,b) (((a)>(b))?(a):(b))
+#define MIN(a,b) (((a)<(b))?(a):(b))
+
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
 struct hstate hstates[HUGE_MAX_HSTATE];
@@ -5321,25 +5324,21 @@ static bool vma_shareable(struct vm_area_struct *vma, unsigned long addr)
 void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
 				unsigned long *start, unsigned long *end)
 {
-	unsigned long check_addr;
+	unsigned long a_start, a_end;
 
 	if (!(vma->vm_flags & VM_MAYSHARE))
 		return;
 
-	for (check_addr = *start; check_addr < *end; check_addr += PUD_SIZE) {
-		unsigned long a_start = check_addr & PUD_MASK;
-		unsigned long a_end = a_start + PUD_SIZE;
+	/* Extend the range to be PUD aligned for a worst case scenario */
+	a_start = ALIGN_DOWN(*start, PUD_SIZE);
+	a_end = ALIGN(*end, PUD_SIZE);
 
-		/*
-		 * If sharing is possible, adjust start/end if necessary.
-		 */
-		if (range_in_vma(vma, a_start, a_end)) {
-			if (a_start < *start)
-				*start = a_start;
-			if (a_end > *end)
-				*end = a_end;
-		}
-	}
+	/*
+	 * Intersect the range with the vma range, since pmd sharing won't be
+	 * across vma after all
+	 */
+	*start = MAX(vma->vm_start, a_start);
+	*end = MIN(vma->vm_end, a_end);
 }
 
 /*