From patchwork Tue Sep 10 23:43:38 2024
X-Patchwork-Submitter: Ackerley Tng <ackerleytng@google.com>
X-Patchwork-Id: 13799480
Date: Tue, 10 Sep 2024 23:43:38 +0000
Message-ID: <7348091f4c539ed207d9bb0f3744d0f0efb7f2b3.1726009989.git.ackerleytng@google.com>
X-Mailer: git-send-email 2.46.0.598.g6f2099f65c-goog
Subject: [RFC PATCH 07/39] mm: hugetlb: Refactor out hugetlb_alloc_folio
From: Ackerley Tng <ackerleytng@google.com>
To: tabba@google.com, quic_eberman@quicinc.com, roypat@amazon.co.uk,
    jgg@nvidia.com, peterx@redhat.com, david@redhat.com, rientjes@google.com,
    fvdl@google.com, jthoughton@google.com, seanjc@google.com,
    pbonzini@redhat.com, zhiquan1.li@intel.com, fan.du@intel.com,
    jun.miao@intel.com, isaku.yamahata@intel.com, muchun.song@linux.dev,
    mike.kravetz@oracle.com
Cc: erdemaktas@google.com, vannapurve@google.com, ackerleytng@google.com,
    qperret@google.com, jhubbard@nvidia.com, willy@infradead.org,
    shuah@kernel.org, brauner@kernel.org, bfoster@redhat.com,
    kent.overstreet@linux.dev, pvorel@suse.cz, rppt@kernel.org,
    richard.weiyang@gmail.com, anup@brainfault.org, haibo1.xu@intel.com,
    ajones@ventanamicro.com, vkuznets@redhat.com,
    maciej.wieczor-retman@intel.com, pgonda@google.com,
    oliver.upton@linux.dev, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    kvm@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-fsdevel@kvack.org
hugetlb_alloc_folio() allocates a hugetlb folio without handling
reservations in the vma and subpool, since some of those reservation
concepts are hugetlbfs-specific.
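As an illustration (not part of this patch), a user that is not backed
by hugetlbfs reservations could call the new helper roughly as below.
The example_alloc_unreserved_folio() wrapper is made up for this
sketch, and it assumes NO_INTERLEAVE_INDEX (today local to
mm/mempolicy.c) is visible at the call site. Passing false for both
charge_cgroup_reservation and use_hstate_resv requests an allocation
that neither charges the hugetlb cgroup reservation counter nor
consumes h->resv_huge_pages:

    #include <linux/hugetlb.h>
    #include <linux/mempolicy.h>
    #include <linux/sched.h>

    /* Hypothetical wrapper, for illustration only. */
    static struct folio *example_alloc_unreserved_folio(struct hstate *h)
    {
    	nodemask_t *nodemask;
    	struct mempolicy *mpol;
    	int nid;

    	/* No vma in this path; use the current task's mempolicy. */
    	mpol = get_task_policy(current);
    	nid = policy_node_nodemask(mpol, htlb_alloc_mask(h),
    				   NO_INTERLEAVE_INDEX, &nodemask);

    	/*
    	 * false, false: do not charge the hugetlb cgroup reservation
    	 * counter and do not consume h->resv_huge_pages. On failure,
    	 * hugetlb_alloc_folio() unwinds its own cgroup charges and
    	 * returns NULL.
    	 */
    	return hugetlb_alloc_folio(h, mpol, nid, nodemask, false, false);
    }

The vma/subpool reservation accounting stays with
alloc_hugetlb_folio(), which now computes charge_cgroup_reservation
and use_hstate_resv from the region/reserve map before delegating the
actual allocation to hugetlb_alloc_folio().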
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 include/linux/hugetlb.h |  12 ++++
 mm/hugetlb.c            | 144 ++++++++++++++++++++++++----------------
 2 files changed, 98 insertions(+), 58 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index c9bf68c239a0..e4a05a421623 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -690,6 +690,10 @@ struct huge_bootmem_page {
 };
 
 int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);
+struct folio *hugetlb_alloc_folio(struct hstate *h, struct mempolicy *mpol,
+				  int nid, nodemask_t *nodemask,
+				  bool charge_cgroup_reservation,
+				  bool use_hstate_resv);
 struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 				unsigned long addr, int avoid_reserve);
 struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
@@ -1027,6 +1031,14 @@ static inline int isolate_or_dissolve_huge_page(struct page *page,
 	return -ENOMEM;
 }
 
+static inline struct folio *
+hugetlb_alloc_folio(struct hstate *h, struct mempolicy *mpol, int nid,
+		    nodemask_t *nodemask, bool charge_cgroup_reservation,
+		    bool use_hstate_resv)
+{
+	return NULL;
+}
+
 static inline struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 					unsigned long addr,
 					int avoid_reserve)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e341bc0eb49a..7e73ebcc0f26 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3106,6 +3106,75 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
 	return ret;
 }
 
+/**
+ * Allocates a hugetlb folio either by dequeueing or from buddy allocator.
+ */
+struct folio *hugetlb_alloc_folio(struct hstate *h, struct mempolicy *mpol,
+				  int nid, nodemask_t *nodemask,
+				  bool charge_cgroup_reservation,
+				  bool use_hstate_resv)
+{
+	struct hugetlb_cgroup *h_cg = NULL;
+	struct folio *folio;
+	int ret;
+	int idx;
+
+	idx = hstate_index(h);
+
+	if (charge_cgroup_reservation) {
+		ret = hugetlb_cgroup_charge_cgroup_rsvd(
+			idx, pages_per_huge_page(h), &h_cg);
+		if (ret)
+			return NULL;
+	}
+
+	ret = hugetlb_cgroup_charge_cgroup(idx, pages_per_huge_page(h), &h_cg);
+	if (ret)
+		goto err_uncharge_cgroup_reservation;
+
+	spin_lock_irq(&hugetlb_lock);
+
+	folio = dequeue_hugetlb_folio(h, mpol, nid, nodemask, use_hstate_resv);
+	if (!folio) {
+		spin_unlock_irq(&hugetlb_lock);
+
+		folio = alloc_buddy_hugetlb_folio_from_node(h, mpol, nid, nodemask);
+		if (!folio)
+			goto err_uncharge_cgroup;
+
+		spin_lock_irq(&hugetlb_lock);
+		if (use_hstate_resv) {
+			folio_set_hugetlb_restore_reserve(folio);
+			h->resv_huge_pages--;
+		}
+		list_add(&folio->lru, &h->hugepage_activelist);
+		folio_ref_unfreeze(folio, 1);
+		/* Fall through */
+	}
+
+	hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, folio);
+
+	if (charge_cgroup_reservation) {
+		hugetlb_cgroup_commit_charge_rsvd(idx, pages_per_huge_page(h),
+						  h_cg, folio);
+	}
+
+	spin_unlock_irq(&hugetlb_lock);
+
+	return folio;
+
+err_uncharge_cgroup:
+	hugetlb_cgroup_uncharge_cgroup(idx, pages_per_huge_page(h), h_cg);
+
+err_uncharge_cgroup_reservation:
+	if (charge_cgroup_reservation) {
+		hugetlb_cgroup_uncharge_cgroup_rsvd(idx, pages_per_huge_page(h),
+						    h_cg);
+	}
+
+	return NULL;
+}
+
 struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 				    unsigned long addr, int avoid_reserve)
 {
@@ -3114,11 +3183,10 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 	struct folio *folio;
 	long map_chg, map_commit, nr_pages = pages_per_huge_page(h);
 	long gbl_chg;
-	int memcg_charge_ret, ret, idx;
-	struct hugetlb_cgroup *h_cg = NULL;
+	int memcg_charge_ret;
 	struct mem_cgroup *memcg;
-	bool deferred_reserve;
-	gfp_t gfp = htlb_alloc_mask(h) | __GFP_RETRY_MAYFAIL;
+	bool charge_cgroup_reservation;
+	gfp_t gfp = htlb_alloc_mask(h);
 	bool use_hstate_resv;
 	struct mempolicy *mpol;
 	nodemask_t *nodemask;
@@ -3126,13 +3194,14 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 	int nid;
 
 	memcg = get_mem_cgroup_from_current();
-	memcg_charge_ret = mem_cgroup_hugetlb_try_charge(memcg, gfp, nr_pages);
+	memcg_charge_ret =
+		mem_cgroup_hugetlb_try_charge(memcg, gfp | __GFP_RETRY_MAYFAIL,
+					      nr_pages);
 	if (memcg_charge_ret == -ENOMEM) {
 		mem_cgroup_put(memcg);
 		return ERR_PTR(-ENOMEM);
 	}
 
-	idx = hstate_index(h);
 	/*
 	 * Examine the region/reserve map to determine if the process
 	 * has a reservation for the page to be allocated. A return
@@ -3160,57 +3229,22 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 	}
 
-	/* If this allocation is not consuming a reservation, charge it now.
-	 */
-	deferred_reserve = map_chg || avoid_reserve;
-	if (deferred_reserve) {
-		ret = hugetlb_cgroup_charge_cgroup_rsvd(
-			idx, pages_per_huge_page(h), &h_cg);
-		if (ret)
-			goto out_subpool_put;
-	}
-
-	ret = hugetlb_cgroup_charge_cgroup(idx, pages_per_huge_page(h), &h_cg);
-	if (ret)
-		goto out_uncharge_cgroup_reservation;
-
 	use_hstate_resv = should_use_hstate_resv(vma, gbl_chg, avoid_reserve);
 
-	spin_lock_irq(&hugetlb_lock);
+	/*
+	 * charge_cgroup_reservation if this allocation is not consuming a
+	 * reservation
+	 */
+	charge_cgroup_reservation = map_chg || avoid_reserve;
 
 	mpol = get_vma_policy(vma, addr, hstate_vma(vma)->order, &ilx);
-	nid = policy_node_nodemask(mpol, htlb_alloc_mask(h), ilx, &nodemask);
-	folio = dequeue_hugetlb_folio(h, mpol, nid, nodemask, use_hstate_resv);
-	if (!folio) {
-		spin_unlock_irq(&hugetlb_lock);
-
-		folio = alloc_buddy_hugetlb_folio_from_node(h, mpol, nid, nodemask);
-		if (!folio) {
-			mpol_cond_put(mpol);
-			goto out_uncharge_cgroup;
-		}
-
-		spin_lock_irq(&hugetlb_lock);
-		if (use_hstate_resv) {
-			folio_set_hugetlb_restore_reserve(folio);
-			h->resv_huge_pages--;
-		}
-		list_add(&folio->lru, &h->hugepage_activelist);
-		folio_ref_unfreeze(folio, 1);
-		/* Fall through */
-	}
+	nid = policy_node_nodemask(mpol, gfp, ilx, &nodemask);
+	folio = hugetlb_alloc_folio(h, mpol, nid, nodemask,
+				    charge_cgroup_reservation, use_hstate_resv);
 	mpol_cond_put(mpol);
 
-	hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, folio);
-	/* If allocation is not consuming a reservation, also store the
-	 * hugetlb_cgroup pointer on the page.
-	 */
-	if (deferred_reserve) {
-		hugetlb_cgroup_commit_charge_rsvd(idx, pages_per_huge_page(h),
-						  h_cg, folio);
-	}
-
-	spin_unlock_irq(&hugetlb_lock);
+	if (!folio)
+		goto out_subpool_put;
 
 	hugetlb_set_folio_subpool(folio, spool);
 
@@ -3229,7 +3263,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 
 		rsv_adjust = hugepage_subpool_put_pages(spool, 1);
 		hugetlb_acct_memory(h, -rsv_adjust);
-		if (deferred_reserve) {
+		if (charge_cgroup_reservation) {
 			spin_lock_irq(&hugetlb_lock);
 			hugetlb_cgroup_uncharge_folio_rsvd(hstate_index(h),
 					pages_per_huge_page(h), folio);
@@ -3243,12 +3277,6 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 
 	return folio;
 
-out_uncharge_cgroup:
-	hugetlb_cgroup_uncharge_cgroup(idx, pages_per_huge_page(h), h_cg);
-out_uncharge_cgroup_reservation:
-	if (deferred_reserve)
-		hugetlb_cgroup_uncharge_cgroup_rsvd(idx, pages_per_huge_page(h),
-						    h_cg);
 out_subpool_put:
	if (map_chg || avoid_reserve)
 		hugepage_subpool_put_pages(spool, 1);