From patchwork Mon Jun 17 13:14:11 2024
X-Patchwork-Submitter: Yunsheng Lin
X-Patchwork-Id: 13700612
From: Yunsheng Lin <linyunsheng@huawei.com>
Cc: Yunsheng Lin, Alexander Duyck, Jonathan Corbet, Andrew Morton
Subject: [PATCH net-next v8 12/13] mm: page_frag: update documentation for
 page_frag
Date: Mon, 17 Jun 2024 21:14:11 +0800
Message-ID: <20240617131413.25189-13-linyunsheng@huawei.com>
In-Reply-To: <20240617131413.25189-1-linyunsheng@huawei.com>
References: <20240617131413.25189-1-linyunsheng@huawei.com>

Update documentation about the design, implementation and API usage for
page_frag.

CC: Alexander Duyck
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 Documentation/mm/page_frags.rst | 163 +++++++++++++++++++++++++++++++-
 include/linux/page_frag_cache.h | 107 +++++++++++++++++++++
 mm/page_frag_cache.c            |  77 ++++++++++++++-
 3 files changed, 344 insertions(+), 3 deletions(-)

diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
index 503ca6cdb804..6a4ac2616098 100644
--- a/Documentation/mm/page_frags.rst
+++ b/Documentation/mm/page_frags.rst
@@ -1,3 +1,5 @@
+.. SPDX-License-Identifier: GPL-2.0
+
 ==============
 Page fragments
 ==============
@@ -40,4 +42,163 @@ page via a single call.
 The advantage to doing this is that it allows for cleaning up the multiple
 references that were added to a page in order to avoid calling get_page per
 allocation.

-Alexander Duyck, Nov 29, 2016.
+
+Architecture overview
+=====================
+
+.. code-block:: none
+
+                      +----------------------+
+                      | page_frag API caller |
+                      +----------------------+
+                                 |
+                                 |
+                                 v
+    +---------------------------------------------------------------+
+    |                     request page fragment                     |
+    +---------------------------------------------------------------+
+         |                            |                         |
+         |                            |                         |
+         |                     Cache not enough                 |
+         |                            |                         |
+         |                            v                         |
+    Cache empty              +-----------------+                |
+         |                   | drain old cache |                |
+         |                   +-----------------+                |
+         |                            |                         |
+         v____________________________v                         |
+                        |                                       |
+                        |                                       |
+       _________________v_______________                        |
+      |                                 |                       |
+      |                                 |               Cache is enough
+      |                                 |                       |
+ PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE   |                       |
+      |                                 |                       |
+      |    PAGE_SIZE >= PAGE_FRAG_CACHE_MAX_SIZE                |
+      v                                 |                       |
+  +----------------------------------+  |                       |
+  | refill cache with order > 0 page |  |                       |
+  +----------------------------------+  |                       |
+      |                                 |                       |
+      |                                 |                       |
+      |            Refill failed        |                       |
+      |                  |              |                       |
+      |                  v              v                       |
+      |       +------------------------------------+            |
+      |       |   refill cache with order 0 page   |            |
+      |       +------------------------------------+            |
+      |                        |                                |
+  Refill succeed               |                                |
+      |                 Refill succeed                          |
+      |                        |                                |
+      v                        v                                v
+    +---------------------------------------------------------------+
+    |                  allocate fragment from cache                 |
+    +---------------------------------------------------------------+
+
+API interface
+=============
+
+As the design and implementation of the page_frag API implies, the
+allocation side does not allow concurrent calls. Instead, the caller must
+ensure that there are no concurrent alloc calls to the same page_frag_cache
+instance, either by using its own lock or by relying on a lockless
+guarantee such as NAPI softirq context.
+
+Depending on its alignment requirement, the page_frag API caller may call
+page_frag_alloc*_align*() to ensure that the returned virtual address or
+page offset is aligned according to the 'align/alignment' parameter. Note
+that the size of the allocated fragment is not aligned; the caller needs to
+provide an aligned fragsz if there is an alignment requirement for the size
+of the fragment.
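+
+As a minimal sketch of the aligned-size case (illustrative only: it assumes
+a caller-owned ``pfrag`` cache, a requirement that both the start and the
+size of the fragment be ``sizeof(long)`` aligned, and a hypothetical
+``use_fragment()`` consumer):
+
+.. code-block:: c
+
+    /* only the start address is aligned by the API; align the size here */
+    fragsz = ALIGN(fragsz, sizeof(long));
+    va = page_frag_alloc_va_align(pfrag, fragsz, GFP_KERNEL, sizeof(long));
+    if (va)
+        use_fragment(va, fragsz);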
+
+Depending on the use case, callers expecting to deal with the virtual
+address, the page, or both the virtual address and the page may call the
+page_frag_alloc_va*, page_frag_alloc_pg*, or page_frag_alloc* APIs
+accordingly.
+
+There is also a use case that needs a minimum amount of memory in order to
+make forward progress, but can perform better if more memory is available.
+Using the page_frag_alloc_prepare() and page_frag_alloc_commit() related
+APIs, the caller requests the minimum memory it needs and the prepare API
+returns the maximum size of the fragment available. The caller then either
+calls the commit API to report how much memory it actually used, or skips
+the commit if it decided not to use any memory.
+
+.. kernel-doc:: include/linux/page_frag_cache.h
+   :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc
+                 page_frag_cache_page_offset page_frag_alloc_va
+                 page_frag_alloc_va_align page_frag_alloc_va_prepare_align
+                 page_frag_alloc_probe page_frag_alloc_commit
+                 page_frag_alloc_commit_noref page_frag_alloc_abort
+
+.. kernel-doc:: mm/page_frag_cache.c
+   :identifiers: __page_frag_alloc_va_align page_frag_alloc_pg
+                 page_frag_alloc_va_prepare page_frag_alloc_pg_prepare
+                 page_frag_alloc_prepare page_frag_cache_drain
+                 page_frag_free_va
+
+Coding examples
+===============
+
+Init & Drain API
+----------------
+
+.. code-block:: c
+
+    page_frag_cache_init(pfrag);
+    ...
+    page_frag_cache_drain(pfrag);
+
+Alloc & Free API
+----------------
+
+.. code-block:: c
+
+    void *va;
+
+    va = page_frag_alloc_va_align(pfrag, size, gfp, align);
+    if (!va)
+        goto do_error;
+
+    err = do_something(va, size);
+    if (err) {
+        page_frag_free_va(va);
+        goto do_error;
+    }
+
+Prepare & Commit API
+--------------------
+
+.. code-block:: c
+
+    unsigned int offset, size;
+    bool merge = true;
+    struct page *page;
+    void *va;
+
+    size = 32U;
+    page = page_frag_alloc_prepare(pfrag, &offset, &size, &va, gfp);
+    if (!page)
+        goto wait_for_space;
+
+    copy = min_t(unsigned int, copy, size);
+    if (!skb_can_coalesce(skb, i, page, offset)) {
+        if (i >= max_skb_frags)
+            goto new_segment;
+
+        merge = false;
+    }
+
+    copy = mem_schedule(copy);
+    if (!copy)
+        goto wait_for_space;
+
+    if (!copy_from_iter_full_nocache(va, copy, iter))
+        goto do_error;
+
+    if (merge) {
+        skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
+        page_frag_alloc_commit_noref(pfrag, copy);
+    } else {
+        skb_fill_page_desc(skb, i, page, offset, copy);
+        page_frag_alloc_commit(pfrag, copy);
+    }
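+
+Probe & Commit API
+------------------
+
+A minimal probe-path sketch (illustrative only: it assumes the probe
+out-parameters are passed by address as in the prepare API, and that
+``data`` and ``used`` come from the caller's context):
+
+.. code-block:: c
+
+    unsigned int offset, fragsz;
+    struct page *page;
+    void *va;
+
+    /* probe does not refill; fall back to the prepare API on failure */
+    fragsz = 32U;
+    page = page_frag_alloc_probe(pfrag, &offset, &fragsz, &va);
+    if (!page) {
+        page = page_frag_alloc_prepare(pfrag, &offset, &fragsz, &va, gfp);
+        if (!page)
+            goto wait_for_space;
+    }
+
+    /* up to fragsz bytes are usable; commit only what was consumed */
+    used = min_t(unsigned int, used, fragsz);
+    memcpy(va, data, used);
+    page_frag_alloc_commit(pfrag, used);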
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index e95d44a36ec9..234b9b1c6f63 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -71,11 +71,28 @@ struct page_frag_cache {
 #endif
 };

+/**
+ * page_frag_cache_init() - Init page_frag cache.
+ * @nc: page_frag cache to be initialized
+ *
+ * Inline helper to init the page_frag cache.
+ */
 static inline void page_frag_cache_init(struct page_frag_cache *nc)
 {
        memset(nc, 0, sizeof(*nc));
 }

+/**
+ * page_frag_cache_is_pfmemalloc() - Check for pfmemalloc.
+ * @nc: page_frag cache from which to check
+ *
+ * Used to check if the current page in the page_frag cache is pfmemalloc'ed.
+ * It has the same calling context expectation as the alloc API.
+ *
+ * Return:
+ * true if the current page in the page_frag cache is pfmemalloc'ed,
+ * otherwise return false.
+ */
 static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
 {
        return encoded_page_pfmemalloc(nc->encoded_va);
@@ -95,6 +112,19 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
                                 unsigned int fragsz, gfp_t gfp_mask,
                                 unsigned int align_mask);

+/**
+ * page_frag_alloc_va_align() - Alloc a page fragment with aligning requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested aligning requirement for the virtual address of the
+ *     fragment
+ *
+ * WARN_ON_ONCE() checking for @align before allocating a page fragment from
+ * the page_frag cache with an aligning requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
                                             unsigned int fragsz,
                                             gfp_t gfp_mask, unsigned int align)
@@ -103,11 +133,32 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
        return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, -align);
 }

+/**
+ * page_frag_cache_page_offset() - Return the current page fragment's offset.
+ * @nc: page_frag cache from which to check
+ *
+ * This API is only used by net/sched/em_meta.c for historical reasons; do not
+ * use it in new callers unless there is a strong reason to.
+ *
+ * Return:
+ * the offset of the current page fragment in the page_frag cache.
+ */
 static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc)
 {
        return page_frag_cache_page_size(nc->encoded_va) - nc->remaining;
 }

+/**
+ * page_frag_alloc_va() - Alloc a page fragment.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Get a page fragment from the page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc_va(struct page_frag_cache *nc,
                                       unsigned int fragsz, gfp_t gfp_mask)
 {
@@ -117,6 +168,21 @@ static inline void *page_frag_alloc_va(struct page_frag_cache *nc,

 void *page_frag_alloc_va_prepare(struct page_frag_cache *nc,
                                 unsigned int *fragsz, gfp_t gfp);

+/**
+ * page_frag_alloc_va_prepare_align() - Prepare allocating a page fragment
+ *     with an aligning requirement.
+ * @nc: page_frag cache from which to prepare
+ * @fragsz: in as the requested size, out as the available size
+ * @gfp: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested aligning requirement
+ *
+ * WARN_ON_ONCE() checking for @align before preparing an aligned page fragment
+ * with a minimum size of @fragsz; @fragsz is also used to report the maximum
+ * size of the page fragment the caller can use.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 static inline void *page_frag_alloc_va_prepare_align(struct page_frag_cache *nc,
                                                     unsigned int *fragsz,
                                                     gfp_t gfp,
@@ -151,6 +217,21 @@ static inline struct encoded_va *__page_frag_alloc_probe(struct page_frag_cache
        return encoded_va;
 }

+/**
+ * page_frag_alloc_probe - Probe the available page fragment.
+ * @nc: page_frag cache from which to probe
+ * @offset: out as the offset of the page fragment
+ * @fragsz: in as the requested size, out as the available size
+ * @va: out as the virtual address of the returned page fragment
+ *
+ * Probe the currently available memory for the caller without refilling the
+ * cache. If no space is available in the page_frag cache, return NULL.
+ * If the requested space is available, up to @fragsz bytes may be added to
+ * the fragment using the commit API.
+ *
+ * Return:
+ * the page fragment, otherwise return NULL.
+ */
 #define page_frag_alloc_probe(nc, offset, fragsz, va)                  \
 ({                                                                     \
        struct page *__page = NULL;                                     \
                                                                        \
@@ -165,6 +246,14 @@ static inline struct encoded_va *__page_frag_alloc_probe(struct page_frag_cache
        __page;                                                         \
 })

+/**
+ * page_frag_alloc_commit - Commit allocating a page fragment.
+ * @nc: page_frag cache from which to commit
+ * @fragsz: size of the page fragment that has been used
+ *
+ * Commit the actual used size for the allocation that was either prepared
+ * or probed.
+ */
 static inline void page_frag_alloc_commit(struct page_frag_cache *nc,
                                          unsigned int fragsz)
 {
@@ -173,6 +262,16 @@ static inline void page_frag_alloc_commit(struct page_frag_cache *nc,
        nc->remaining -= fragsz;
 }

+/**
+ * page_frag_alloc_commit_noref - Commit allocating a page fragment without
+ *     taking the page refcount.
+ * @nc: page_frag cache from which to commit
+ * @fragsz: size of the page fragment that has been used
+ *
+ * Commit the alloc preparing or probing by passing the actual used size, but
+ * without taking the refcount. Mostly used for fragment coalescing cases
+ * where the current fragment can share the same refcount with the previous
+ * fragment.
+ */
 static inline void page_frag_alloc_commit_noref(struct page_frag_cache *nc,
                                                unsigned int fragsz)
 {
@@ -180,6 +279,14 @@ static inline void page_frag_alloc_commit_noref(struct page_frag_cache *nc,
        nc->remaining -= fragsz;
 }

+/**
+ * page_frag_alloc_abort - Abort the page fragment allocation.
+ * @nc: page_frag cache to which the page fragment is aborted back
+ * @fragsz: size of the page fragment to be aborted
+ *
+ * It is expected to be called from the same context as the alloc API.
+ * Mostly used for error handling cases where the fragment is no longer
+ * needed.
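+ *
+ * A minimal sketch (illustrative; do_something() is a stand-in for caller
+ * logic, and @fragsz is assumed to be the most recent allocation from @nc):
+ *
+ *	va = page_frag_alloc_va(nc, fragsz, gfp);
+ *	if (va && do_something(va, fragsz)) {
+ *		page_frag_alloc_abort(nc, fragsz);
+ *		va = NULL;
+ *	}
+ */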
 static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
                                         unsigned int fragsz)
 {

diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index a6eb0ab2e7f9..b42864ee6f5d 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -91,6 +91,18 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
        return page;
 }

+/**
+ * page_frag_alloc_va_prepare() - Prepare allocating a page fragment.
+ * @nc: page_frag cache from which to prepare
+ * @fragsz: in as the requested size, out as the available size
+ * @gfp: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Prepare a page fragment with a minimum size of @fragsz; @fragsz is also
+ * used to report the maximum size of the page fragment the caller can use.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 void *page_frag_alloc_va_prepare(struct page_frag_cache *nc,
                                 unsigned int *fragsz, gfp_t gfp)
 {
@@ -113,6 +125,19 @@ void *page_frag_alloc_va_prepare(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(page_frag_alloc_va_prepare);

+/**
+ * page_frag_alloc_pg_prepare - Prepare allocating a page fragment.
+ * @nc: page_frag cache from which to prepare
+ * @offset: out as the offset of the page fragment
+ * @fragsz: in as the requested size, out as the available size
+ * @gfp: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Prepare a page fragment with a minimum size of @fragsz; @fragsz is also
+ * used to report the maximum size of the page fragment the caller can use.
+ *
+ * Return:
+ * the page fragment, otherwise return NULL.
+ */
 struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
                                        unsigned int *offset,
                                        unsigned int *fragsz, gfp_t gfp)
@@ -143,6 +168,21 @@ struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(page_frag_alloc_pg_prepare);

+/**
+ * page_frag_alloc_prepare - Prepare allocating a page fragment.
+ * @nc: page_frag cache from which to prepare
+ * @offset: out as the offset of the page fragment
+ * @fragsz: in as the requested size, out as the available size
+ * @va: out as the virtual address of the returned page fragment
+ * @gfp: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Prepare a page fragment with a minimum size of @fragsz; @fragsz is also
+ * used to report the maximum size of the page fragment. Return both the
+ * 'struct page' and the virtual address of the fragment to the caller.
+ *
+ * Return:
+ * the page fragment, otherwise return NULL.
+ */
 struct page *page_frag_alloc_prepare(struct page_frag_cache *nc,
                                     unsigned int *offset,
                                     unsigned int *fragsz,
@@ -175,6 +215,18 @@ struct page *page_frag_alloc_prepare(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(page_frag_alloc_prepare);

+/**
+ * page_frag_alloc_pg - Alloc a page fragment.
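+ * (A minimal sketch of a page-based caller is given below; it is
+ * illustrative only, with skb, i and fragsz assumed from the caller's
+ * context:
+ *
+ *	page = page_frag_alloc_pg(nc, &offset, fragsz, gfp);
+ *	if (page)
+ *		skb_fill_page_desc(skb, i, page, offset, fragsz);
+ * )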
+ * @nc: page_frag cache from which to allocate
+ * @offset: out as the offset of the page fragment
+ * @fragsz: the requested fragment size
+ * @gfp: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Get a page fragment from the page_frag cache.
+ *
+ * Return:
+ * the page fragment, otherwise return NULL.
+ */
 struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
                                unsigned int *offset, unsigned int fragsz,
                                gfp_t gfp)
@@ -205,6 +257,10 @@ struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(page_frag_alloc_pg);

+/**
+ * page_frag_cache_drain - Drain the current page from the page_frag cache.
+ * @nc: page_frag cache from which to drain
+ */
 void page_frag_cache_drain(struct page_frag_cache *nc)
 {
        if (!nc->encoded_va)
@@ -225,6 +281,19 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);

+/**
+ * __page_frag_alloc_va_align() - Alloc a page fragment with aligning
+ *     requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align_mask: the requested aligning requirement for the 'va'
+ *
+ * Get a page fragment from the page_frag cache with an aligning requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
 void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
                                 unsigned int fragsz, gfp_t gfp_mask,
                                 unsigned int align_mask)
@@ -260,8 +329,12 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
 }
 EXPORT_SYMBOL(__page_frag_alloc_va_align);

-/*
- * Frees a page fragment allocated out of either a compound or order 0 page.
+/**
+ * page_frag_free_va - Free a page fragment.
+ * @addr: va of the page fragment to be freed
+ *
+ * Free a page fragment allocated out of either a compound or order 0 page
+ * by virtual address.
  */
 void page_frag_free_va(void *addr)
 {
        struct page *page = virt_to_head_page(addr);

        if (unlikely(put_page_testzero(page)))
                free_unref_page(page, compound_order(page));
 }
 EXPORT_SYMBOL(page_frag_free_va);
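
A closing usage sketch of the free path (illustrative only; it assumes a
caller-owned 'pfrag' whose current fragment has not been handed off
elsewhere):

	va = page_frag_alloc_va(pfrag, fragsz, GFP_KERNEL);
	...
	/* per-fragment teardown: drop this fragment's page reference */
	page_frag_free_va(va);
	...
	/* cache teardown: drop the cache's own reference on the current page */
	page_frag_cache_drain(pfrag);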