From patchwork Sun Dec 1 21:22:38 2024
X-Patchwork-Submitter: Peter Xu <peterx@redhat.com>
X-Patchwork-Id: 13889655
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Rik van Riel, Breno Leitao, Andrew Morton, peterx@redhat.com,
    Muchun Song, Oscar Salvador, Roman Gushchin, Naoya Horiguchi,
    Ackerley Tng
Subject: [PATCH 5/7] mm/hugetlb: Simplify vma_has_reserves()
Date: Sun, 1 Dec 2024 16:22:38 -0500
Message-ID: <20241201212240.533824-6-peterx@redhat.com>
X-Mailer: git-send-email 2.47.0
In-Reply-To: <20241201212240.533824-1-peterx@redhat.com>
References: <20241201212240.533824-1-peterx@redhat.com>
MIME-Version: 1.0

vma_has_reserves() is a helper that tries to tell whether the vma should
consume one reservation when allocating the hugetlb folio.  However, it is
not clear why we need such complexity, as the same information is already
represented in the "chg" variable.

From the alloc_hugetlb_folio() context, "chg" (or, in that function,
"gbl_chg") is defined as below (a tiny standalone sketch of the two cases
follows the list):

  - If gbl_chg=1, the allocation cannot reuse an existing reservation
  - If gbl_chg=0, the allocation should reuse an existing reservation
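As a tiny illustration of those two cases only -- a standalone userspace
sketch, not kernel code: resv_huge_pages below is a plain variable standing
in for h->resv_huge_pages, and allocate_folio() is a made-up stand-in for
the allocation path:

/* Standalone model of the gbl_chg semantics above; not kernel code. */
#include <stdbool.h>
#include <stdio.h>

static long resv_huge_pages = 2;        /* stands in for h->resv_huge_pages */

/*
 * gbl_chg == 0: the allocation should reuse (consume) an existing
 * reservation.  gbl_chg == 1: it cannot, so the reserve counter is left
 * alone.  This mirrors the folio_set_hugetlb_restore_reserve() +
 * h->resv_huge_pages-- pattern in the allocator.
 */
static void allocate_folio(long gbl_chg)
{
        bool use_reserve = (gbl_chg == 0);

        if (use_reserve)
                resv_huge_pages--;

        printf("gbl_chg=%ld -> %s a reservation, resv_huge_pages=%ld\n",
               gbl_chg, use_reserve ? "consumed" : "did not consume",
               resv_huge_pages);
}

int main(void)
{
        allocate_folio(0);      /* reservation exists: reuse it */
        allocate_folio(1);      /* cannot reuse: leave the counter alone */
        return 0;
}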
Firstly, map_chg is defined as follows, covering all cases of hugetlb
reservation scenarios (mostly via vma_needs_reservation(); cow_from_owner
is the outlier):

  CONDITION                                              HAS RESERVATION?
  =========                                              ================
  - SHARED: always check against per-inode resv_map
    (ignore NORESERVE)
    - If resv exists                                      ==> YES [1]
    - If not                                              ==> NO  [2]
  - PRIVATE: complicated...
    - Request came from a CoW from the owner's resv map   ==> NO  [3]
      (when cow_from_owner==true)
    - If the vma does not own a resv_map at all           ==> NO  [4]
      (examples: VM_NORESERVE, private fork())
    - If the vma owns a resv_map, but the resv
      doesn't exist                                       ==> NO  [5]
    - If the vma owns a resv_map, and the resv exists     ==> YES [6]

Further on, gbl_chg also takes the subpool (spool) setup into account, so
it is a decision based on the full context.

If we look at vma_has_reserves(), it mostly re-checks what has already been
processed by the map_chg accounting (I marked each return value with the
corresponding case above):

static bool vma_has_reserves(struct vm_area_struct *vma, long chg)
{
        if (vma->vm_flags & VM_NORESERVE) {
                if (vma->vm_flags & VM_MAYSHARE && chg == 0)
                        return true;            ==> [1]
                else
                        return false;           ==> [2] or [4]
        }

        if (vma->vm_flags & VM_MAYSHARE) {
                if (chg)
                        return false;           ==> [2]
                else
                        return true;            ==> [1]
        }

        if (is_vma_resv_set(vma, HPAGE_RESV_OWNER)) {
                if (chg)
                        return false;           ==> [5]
                else
                        return true;            ==> [6]
        }

        return false;                           ==> [4]
}

It does not check [3], but case [3] is already covered by the "chg" /
"gbl_chg" / "map_chg" calculations.

In short, vma_has_reserves() doesn't provide anything more than
"return !chg", so just simplify all of it (a small standalone cross-check
of this claim is sketched after the diffstat below).

There are a lot of comments describing truncation races; IIUC there should
be no race as long as the map_chg accounting is done properly.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/hugetlb.c | 67 ++++++----------------------------------------------------
 1 file changed, 7 insertions(+), 60 deletions(-)
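As a quick, non-authoritative cross-check of the "!chg" claim above, here is
a standalone userspace model (not kernel code: the flag values, the toy_vma
struct and the resv-owner bit are simplified stand-ins for the real
definitions) that compares the old helper's control flow with the proposed
"chg == 0":

/* Standalone cross-check: old vma_has_reserves() vs "chg == 0". */
#include <stdbool.h>
#include <stdio.h>

#define VM_MAYSHARE      0x1UL
#define VM_NORESERVE     0x2UL
#define HPAGE_RESV_OWNER 0x1UL          /* modeled as a plain flag here */

struct toy_vma {
        unsigned long vm_flags;
        unsigned long resv_flags;       /* stands in for is_vma_resv_set() state */
};

/* Same control flow as the helper being removed. */
static bool old_vma_has_reserves(struct toy_vma *vma, long chg)
{
        if (vma->vm_flags & VM_NORESERVE)
                return (vma->vm_flags & VM_MAYSHARE) && chg == 0;  /* [1] vs [2]/[4] */
        if (vma->vm_flags & VM_MAYSHARE)
                return chg == 0;                                   /* [1] vs [2] */
        if (vma->resv_flags & HPAGE_RESV_OWNER)
                return chg == 0;                                   /* [6] vs [5] */
        return false;                                              /* [4] */
}

/* The simplification this patch makes. */
static bool new_vma_has_reserves(long chg)
{
        return chg == 0;
}

int main(void)
{
        /*
         * Enumerate every flag combination for both chg values.  The rows
         * where old and new disagree are vmas with no reservation tracking
         * at all (e.g. case [4]), where -- per the argument above -- the
         * map_chg/gbl_chg accounting never hands the helper chg == 0.
         */
        for (unsigned long flags = 0; flags <= (VM_MAYSHARE | VM_NORESERVE); flags++) {
                for (unsigned long owner = 0; owner <= 1; owner++) {
                        for (long chg = 0; chg <= 1; chg++) {
                                struct toy_vma vma = {
                                        flags, owner ? HPAGE_RESV_OWNER : 0
                                };
                                bool o = old_vma_has_reserves(&vma, chg);
                                bool n = new_vma_has_reserves(chg);

                                printf("flags=%lx owner=%lu chg=%ld old=%d new=%d%s\n",
                                       flags, owner, chg, (int)o, (int)n,
                                       o != n ? "  (chg==0 not reachable here)" : "");
                        }
                }
        }
        return 0;
}

Compiling and running the model prints the full enumeration; the divergent
rows are exactly the no-reservation-tracking vmas discussed above.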
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 14cfe0bb01e4..b7e16b3c4e67 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1247,66 +1247,13 @@ void clear_vma_resv_huge_pages(struct vm_area_struct *vma)
 }
 
 /* Returns true if the VMA has associated reserve pages */
-static bool vma_has_reserves(struct vm_area_struct *vma, long chg)
+static bool vma_has_reserves(long chg)
 {
-        if (vma->vm_flags & VM_NORESERVE) {
-                /*
-                 * This address is already reserved by other process(chg == 0),
-                 * so, we should decrement reserved count. Without decrementing,
-                 * reserve count remains after releasing inode, because this
-                 * allocated page will go into page cache and is regarded as
-                 * coming from reserved pool in releasing step.  Currently, we
-                 * don't have any other solution to deal with this situation
-                 * properly, so add work-around here.
-                 */
-                if (vma->vm_flags & VM_MAYSHARE && chg == 0)
-                        return true;
-                else
-                        return false;
-        }
-
-        /* Shared mappings always use reserves */
-        if (vma->vm_flags & VM_MAYSHARE) {
-                /*
-                 * We know VM_NORESERVE is not set.  Therefore, there SHOULD
-                 * be a region map for all pages.  The only situation where
-                 * there is no region map is if a hole was punched via
-                 * fallocate.  In this case, there really are no reserves to
-                 * use.  This situation is indicated if chg != 0.
-                 */
-                if (chg)
-                        return false;
-                else
-                        return true;
-        }
-
         /*
-         * Only the process that called mmap() has reserves for
-         * private mappings.
+         * Now "chg" has all the conditions considered for whether we
+         * should use an existing reservation.
          */
-        if (is_vma_resv_set(vma, HPAGE_RESV_OWNER)) {
-                /*
-                 * Like the shared case above, a hole punch or truncate
-                 * could have been performed on the private mapping.
-                 * Examine the value of chg to determine if reserves
-                 * actually exist or were previously consumed.
-                 * Very Subtle - The value of chg comes from a previous
-                 * call to vma_needs_reserves(). The reserve map for
-                 * private mappings has different (opposite) semantics
-                 * than that of shared mappings.  vma_needs_reserves()
-                 * has already taken this difference in semantics into
-                 * account.  Therefore, the meaning of chg is the same
-                 * as in the shared case above.  Code could easily be
-                 * combined, but keeping it separate draws attention to
-                 * subtle differences.
-                 */
-                if (chg)
-                        return false;
-                else
-                        return true;
-        }
-
-        return false;
+        return chg == 0;
 }
 
 static void enqueue_hugetlb_folio(struct hstate *h, struct folio *folio)
@@ -1407,7 +1354,7 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
          * have no page reserves. This check ensures that reservations are
          * not "stolen". The child may still get SIGKILLed
          */
-        if (!vma_has_reserves(vma, chg) && !available_huge_pages(h))
+        if (!vma_has_reserves(chg) && !available_huge_pages(h))
                 goto err;
 
         gfp_mask = htlb_alloc_mask(h);
@@ -1425,7 +1372,7 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
 
         folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask,
                                                         nid, nodemask);
-        if (folio && vma_has_reserves(vma, chg)) {
+        if (folio && vma_has_reserves(chg)) {
                 folio_set_hugetlb_restore_reserve(folio);
                 h->resv_huge_pages--;
         }
@@ -3076,7 +3023,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
         if (!folio)
                 goto out_uncharge_cgroup;
         spin_lock_irq(&hugetlb_lock);
-        if (vma_has_reserves(vma, gbl_chg)) {
+        if (vma_has_reserves(gbl_chg)) {
                 folio_set_hugetlb_restore_reserve(folio);
                 h->resv_huge_pages--;
         }