From patchwork Wed May 15 13:09:31 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13665238
From: Yunsheng Lin <linyunsheng@huawei.com>
To: , ,
CC: , , Yunsheng Lin, Alexander Duyck, Jonathan Corbet, Andrew Morton, ,
Subject: [RFC v4 12/13] mm: page_frag: update documentation for page_frag
Date: Wed, 15 May 2024 21:09:31 +0800
Message-ID: <20240515130932.18842-13-linyunsheng@huawei.com>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20240515130932.18842-1-linyunsheng@huawei.com>
References: <20240515130932.18842-1-linyunsheng@huawei.com>

Update documentation about the design, implementation and API usage for
page_frag.

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 Documentation/mm/page_frags.rst | 165 +++++++++++++++++++++++++++++++-
 include/linux/page_frag_cache.h | 108 +++++++++++++++++++++
 mm/page_frag_cache.c            |  65 ++++++++++++-
 3 files changed, 335 insertions(+), 3 deletions(-)

diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
index 503ca6cdb804..cd66a54383c5 100644
--- a/Documentation/mm/page_frags.rst
+++ b/Documentation/mm/page_frags.rst
@@ -1,3 +1,5 @@
+.. SPDX-License-Identifier: GPL-2.0
+
 ==============
 Page fragments
 ==============
@@ -40,4 +42,165 @@ page via a single call. The advantage to doing this is that it allows for
 cleaning up the multiple references that were added to a page in order to
 avoid calling get_page per allocation.
 
-Alexander Duyck, Nov 29, 2016.
+
+Architecture overview
+=====================
+
+.. code-block:: none
+
+                             +----------------------+
+                             | page_frag API caller |
+                             +----------------------+
+                                        ^
+                                        |
+                                        |
+                                        |
+                                        v
+   +------------------------------------------------------------+
+   |                    request page fragment                    |
+   +------------------------------------------------------------+
+        ^                               ^                      ^
+        |                               |                      |
+        |                       Cache not enough               |
+        |                               |                      |
+        |                               v                      |
+   Cache empty                 +-----------------+             |
+        |                      | drain old cache |             |
+        |                      +-----------------+             |
+        |                               ^                      |
+        |_______________________________|                      |
+                     ^                                         |
+                     |                                         |
+    _________________v_______________                          |
+   |                                 |                  Cache is enough
+   |                                 |                         |
+   PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE                        |
+   |                                 |                         |
+   |     PAGE_SIZE >= PAGE_FRAG_CACHE_MAX_SIZE                 |
+   |                                 |                         |
+   v                                 |                         |
+   +----------------------------------+                        |
+   | refill cache with order > 0 page |                        |
+   +----------------------------------+                        |
+     ^                 ^                                       |
+     |                 |                                       |
+     |           Refill failed                                 |
+     |                 |                                       |
+     |                 v                                       |
+     |   +----------------------------------+                  |
+     |   |  refill cache with order 0 page  |                  |
+     |   +----------------------------------+                  |
+     |                 ^                                       |
+   Refill succeed      |                                       |
+     |          Refill succeed                                 |
+     |                 |                                       |
+     v                 v                                       v
+   +------------------------------------------------------------+
+   |               allocate fragment from cache                  |
+   +------------------------------------------------------------+
+
+API interface
+=============
+
+As the design and implementation of the page_frag API imply, the allocation
+side does not allow concurrent calling. The caller must ensure that there are
+no concurrent alloc calls to the same page_frag_cache instance, either by
+using its own lock or by relying on a lockless guarantee such as NAPI softirq.
+
+Depending on the alignment requirement, the page_frag API caller may call
+page_frag_alloc*_align*() to ensure that the returned virtual address or
+offset of the page is aligned according to the 'align/alignment' parameter.
+Note that the size of the allocated fragment is not aligned; the caller needs
+to provide an aligned fragsz if there is an alignment requirement for the
+size of the fragment.
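+
+The note above can be handled with a short sketch; the cache instance 'nc',
+the requested length 'len' and the 'do_error' label are assumptions of this
+example rather than part of the API:
+
+.. code-block:: c
+
+    /* round the size up so both the va and the size are 16-byte aligned */
+    unsigned int fragsz = ALIGN(len, 16);
+    void *va;
+
+    va = page_frag_alloc_va_align(nc, fragsz, GFP_KERNEL, 16);
+    if (!va)
+        goto do_error;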
+
+Depending on the use case, callers expecting to deal with the va, the page,
+or both the va and the page may call the page_frag_alloc_va*,
+page_frag_alloc_pg*, or page_frag_alloc* API accordingly.
+
+There is also a use case that needs a minimum amount of memory in order to
+make forward progress, but that can perform better if more memory is
+available. Using the page_frag_alloc_prepare() and page_frag_alloc_commit()
+related API, the caller requests the minimum memory it needs and the prepare
+API returns the maximum size of the fragment available. The caller then
+either calls the commit API to report how much memory it actually used, or
+does not do so if it decides not to use any memory; see the Prepare & Commit
+example below.
+
+.. kernel-doc:: include/linux/page_frag_cache.h
+   :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc
+                 page_frag_cache_page_offset page_frag_alloc_va
+                 page_frag_alloc_va_align page_frag_alloc_va_prepare_align
+                 page_frag_alloc_probe page_frag_alloc_commit
+                 page_frag_alloc_commit_noref page_frag_alloc_abort
+
+.. kernel-doc:: mm/page_frag_cache.c
+   :identifiers: __page_frag_alloc_va_align page_frag_alloc_va_prepare
+                 page_frag_alloc_pg_prepare page_frag_alloc_prepare
+                 page_frag_cache_drain page_frag_free_va
+
+Coding examples
+===============
+
+Init & Drain API
+----------------
+
+.. code-block:: c
+
+    page_frag_cache_init(pfrag);
+    ...
+    page_frag_cache_drain(pfrag);
+
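+Since the pfmemalloc state of the backing page matters to some callers, a
+networking user may propagate it right after allocating from the cache. A
+short sketch (the 'skb' being filled is an assumption of this example):
+
+.. code-block:: c
+
+    /* let the skb inherit the pfmemalloc state of the backing page */
+    if (page_frag_cache_is_pfmemalloc(pfrag))
+        skb->pfmemalloc = true;
+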
+Alloc & Free API
+----------------
+
+.. code-block:: c
+
+    void *va;
+
+    va = page_frag_alloc_va_align(pfrag, size, gfp, align);
+    if (!va)
+        goto do_error;
+
+    err = do_something(va, size);
+    if (err) {
+        page_frag_free_va(va);
+        goto do_error;
+    }
+
+Prepare & Commit API
+--------------------
+
+.. code-block:: c
+
+    unsigned int copy, offset, size;
+    bool merge = true;
+    struct page *page;
+    void *va;
+
+    size = 32U;
+    page = page_frag_alloc_prepare(pfrag, &offset, &size, &va, gfp);
+    if (!page)
+        goto wait_for_space;
+
+    /* bound the copy by what the iterator holds and what was prepared */
+    copy = min_t(unsigned int, iov_iter_count(iter), size);
+    if (!skb_can_coalesce(skb, i, page, offset)) {
+        if (i >= max_skb_frags)
+            goto new_segment;
+
+        merge = false;
+    }
+
+    copy = mem_schedule(copy);
+    if (!copy)
+        goto wait_for_space;
+
+    if (!copy_from_iter_full_nocache(va, copy, iter))
+        goto do_error;
+
+    if (merge) {
+        skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
+        page_frag_alloc_commit_noref(pfrag, copy);
+    } else {
+        skb_fill_page_desc(skb, i, page, offset, copy);
+        page_frag_alloc_commit(pfrag, copy);
+    }
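+
+Abort API
+---------
+
+A minimal error-handling sketch for the abort API; it assumes that the
+fragment was just allocated from 'pfrag' with the same 'fragsz' and that no
+other alloc call has happened in between, since page_frag_alloc_abort() is
+expected to be called from the same context as the alloc API:
+
+.. code-block:: c
+
+    va = page_frag_alloc_va(pfrag, fragsz, GFP_KERNEL);
+    if (!va)
+        goto do_error;
+
+    err = do_something(va, fragsz);
+    if (err) {
+        /* hand the unused fragment straight back to the cache */
+        page_frag_alloc_abort(pfrag, fragsz);
+        goto do_error;
+    }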

diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 50466f5b71ea..0a1594cb6512 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -61,11 +61,28 @@ struct page_frag_cache {
 #endif
 };
 
+/**
+ * page_frag_cache_init() - Init page_frag cache.
+ * @nc: page_frag cache to be initialized
+ *
+ * Inline helper to init the page_frag cache.
+ */
 static inline void page_frag_cache_init(struct page_frag_cache *nc)
 {
 	memset(nc, 0, sizeof(*nc));
 }
 
+/**
+ * page_frag_cache_is_pfmemalloc() - Check for pfmemalloc.
+ * @nc: page_frag cache from which to check
+ *
+ * Used to check if the current page in the page_frag cache is pfmemalloc'ed.
+ * It has the same calling context expectation as the alloc API.
+ *
+ * Return:
+ * true if the current page in the page_frag cache is pfmemalloc'ed, otherwise
+ * return false.
+ */
 static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
 {
 	return encoded_page_pfmemalloc(nc->encoded_va);
@@ -92,6 +109,19 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
 				 unsigned int fragsz, gfp_t gfp_mask,
 				 unsigned int align_mask);
 
+/**
+ * page_frag_alloc_va_align() - Alloc a page fragment with aligning requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested aligning requirement for the fragment's virtual address
+ *
+ * WARN_ON_ONCE() checking for @align before allocating a page fragment from
+ * the page_frag cache with an aligning requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
 					     unsigned int fragsz,
 					     gfp_t gfp_mask, unsigned int align)
@@ -100,11 +130,32 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
 	return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, -align);
 }
 
+/**
+ * page_frag_cache_page_offset() - Return the current page fragment's offset.
+ * @nc: page_frag cache from which to check
+ *
+ * The API is only used in net/sched/em_meta.c for historical reasons; do not
+ * use it for new callers unless there is a strong reason to.
+ *
+ * Return:
+ * the offset of the current page fragment in the page_frag cache.
+ */
 static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc)
 {
 	return __page_frag_cache_page_offset(nc->encoded_va, nc->remaining);
 }
 
+/**
+ * page_frag_alloc_va() - Alloc a page fragment.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Get a page fragment from the page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc_va(struct page_frag_cache *nc,
 				       unsigned int fragsz, gfp_t gfp_mask)
 {
@@ -114,6 +165,21 @@ static inline void *page_frag_alloc_va(struct page_frag_cache *nc,
 void *page_frag_alloc_va_prepare(struct page_frag_cache *nc, unsigned int *fragsz,
 				 gfp_t gfp);
 
+/**
+ * page_frag_alloc_va_prepare_align() - Prepare allocating a page fragment with
+ * aligning requirement.
+ * @nc: page_frag cache from which to prepare
+ * @fragsz: in as the requested size, out as the available size
+ * @gfp: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested aligning requirement
+ *
+ * WARN_ON_ONCE() checking for @align before preparing an aligned page fragment
+ * with a minimum size of @fragsz; @fragsz is also used to report the maximum
+ * size of the page fragment the caller can use.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc_va_prepare_align(struct page_frag_cache *nc,
 						     unsigned int *fragsz,
 						     gfp_t gfp,
@@ -148,6 +214,21 @@ static inline struct encoded_va *__page_frag_alloc_probe(struct page_frag_cache
 	return encoded_va;
 }
 
+/**
+ * page_frag_alloc_probe - Probe the available page fragment.
+ * @nc: page_frag cache from which to probe
+ * @offset: out as the offset of the page fragment
+ * @fragsz: in as the requested size, out as the available size
+ * @va: out as the virtual address of the returned page fragment
+ *
+ * Probe the currently available memory without triggering a cache refill.
+ * If no space is available in the page_frag cache, return NULL.
+ * If the requested space is available, up to @fragsz bytes may be added to the
+ * fragment using the commit API.
+ *
+ * Return:
+ * the page fragment, otherwise return NULL.
+ */
 #define page_frag_alloc_probe(nc, offset, fragsz, va)			\
 ({									\
 	struct page *__page = NULL;					\
@@ -162,6 +243,14 @@ static inline struct encoded_va *__page_frag_alloc_probe(struct page_frag_cache
 	__page;								\
 })
 
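+/*
+ * Illustrative sketch only, not in-tree code: probe for whatever is left in
+ * the cache without triggering a refill, then commit just the bytes actually
+ * consumed; consume() stands in for caller-specific logic.
+ *
+ *	fragsz = SZ_1K;
+ *	page = page_frag_alloc_probe(nc, &offset, &fragsz, &va);
+ *	if (page) {
+ *		nbytes = consume(va, fragsz);
+ *		page_frag_alloc_commit(nc, nbytes);
+ *	}
+ */
+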
+/**
+ * page_frag_alloc_commit - Commit allocating a page fragment.
+ * @nc: page_frag cache from which to commit
+ * @fragsz: size of the page fragment that has been used
+ *
+ * Commit the actual used size for the allocation that was either prepared or
+ * probed.
+ */
 static inline void page_frag_alloc_commit(struct page_frag_cache *nc,
 					  unsigned int fragsz)
 {
@@ -170,6 +259,16 @@ static inline void page_frag_alloc_commit(struct page_frag_cache *nc,
 	nc->remaining -= fragsz;
 }
 
+/**
+ * page_frag_alloc_commit_noref - Commit allocating a page fragment without
+ * taking page refcount.
+ * @nc: page_frag cache from which to commit
+ * @fragsz: size of the page fragment that has been used
+ *
+ * Commit the alloc preparing or probing by passing the actual used size, but
+ * without taking a page refcount. Mostly used for the fragment coalescing case
+ * when the current fragment can share the refcount with the previous fragment.
+ */
 static inline void page_frag_alloc_commit_noref(struct page_frag_cache *nc,
 						unsigned int fragsz)
 {
@@ -177,6 +276,15 @@ static inline void page_frag_alloc_commit_noref(struct page_frag_cache *nc,
 	nc->remaining -= fragsz;
 }
 
+/**
+ * page_frag_alloc_abort - Abort a page fragment allocated using the
+ * page_frag_alloc() related API, returning it to the page_frag cache.
+ * @nc: page_frag cache to which the page fragment is aborted back
+ * @fragsz: size of the page fragment to be aborted
+ *
+ * It is expected to be called from the same context as the alloc API.
+ * Mostly used for error handling cases where the fragment is no longer needed.
+ */
 static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
 					 unsigned int fragsz)
 {

diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 273ccd715eae..a150ad895663 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -91,6 +91,18 @@ static struct page *page_frag_cache_refill(struct page_frag_cache *nc,
 	return page;
 }
 
+/**
+ * page_frag_alloc_va_prepare() - Prepare allocating a page fragment.
+ * @nc: page_frag cache from which to prepare
+ * @fragsz: in as the requested size, out as the available size
+ * @gfp: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Prepare a page fragment with a minimum size of @fragsz; @fragsz is also used
+ * to report the maximum size of the page fragment the caller can use.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 void *page_frag_alloc_va_prepare(struct page_frag_cache *nc,
 				 unsigned int *fragsz, gfp_t gfp)
 {
@@ -113,6 +125,19 @@ void *page_frag_alloc_va_prepare(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(page_frag_alloc_va_prepare);
 
+/**
+ * page_frag_alloc_pg_prepare - Prepare allocating a page fragment.
+ * @nc: page_frag cache from which to prepare
+ * @offset: out as the offset of the page fragment
+ * @fragsz: in as the requested size, out as the available size
+ * @gfp: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Prepare a page fragment with a minimum size of @fragsz; @fragsz is also used
+ * to report the maximum size of the page fragment the caller can use.
+ *
+ * Return:
+ * the page fragment, otherwise return NULL.
+ */
 struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
 					unsigned int *offset,
 					unsigned int *fragsz, gfp_t gfp)
@@ -143,6 +168,21 @@ struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(page_frag_alloc_pg_prepare);
 
+/**
+ * page_frag_alloc_prepare - Prepare allocating a page fragment.
+ * @nc: page_frag cache from which to prepare
+ * @offset: out as the offset of the page fragment
+ * @fragsz: in as the requested size, out as the available size
+ * @va: out as the virtual address of the returned page fragment
+ * @gfp: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Prepare a page fragment with a minimum size of @fragsz; @fragsz is also used
+ * to report the maximum size of the page fragment. Return both the 'struct
+ * page' and the virtual address of the fragment to the caller.
+ *
+ * Return:
+ * the page fragment, otherwise return NULL.
+ */
 struct page *page_frag_alloc_prepare(struct page_frag_cache *nc,
 				     unsigned int *offset,
 				     unsigned int *fragsz,
@@ -175,6 +215,10 @@ struct page *page_frag_alloc_prepare(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(page_frag_alloc_prepare);
 
+/**
+ * page_frag_cache_drain - Drain the current page from the page_frag cache.
+ * @nc: page_frag cache from which to drain
+ */
 void page_frag_cache_drain(struct page_frag_cache *nc)
 {
 	if (!nc->encoded_va)
@@ -195,6 +239,19 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);
 
+/**
+ * __page_frag_alloc_va_align() - Alloc a page fragment with aligning
+ * requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align_mask: the requested aligning requirement for the 'va'
+ *
+ * Get a page fragment from the page_frag cache with an aligning requirement.
+ *
+ * Return:
+ * va of the page fragment, otherwise return NULL.
+ */
 void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
 				 unsigned int fragsz, gfp_t gfp_mask,
 				 unsigned int align_mask)
@@ -260,8 +317,12 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(__page_frag_alloc_va_align);
 
-/*
- * Frees a page fragment allocated out of either a compound or order 0 page.
+/**
+ * page_frag_free_va - Free a page fragment.
+ * @addr: va of the page fragment to be freed
+ *
+ * Free a page fragment allocated out of either a compound or order 0 page by
+ * virtual address.
  */
 void page_frag_free_va(void *addr)
 {