From patchwork Mon Jan 23 19:17:23 2023
X-Patchwork-Submitter: "T.J. Mercier"
X-Patchwork-Id: 13112838
Date: Mon, 23 Jan 2023 19:17:23 +0000
Message-ID: <20230123191728.2928839-2-tjmercier@google.com>
In-Reply-To: <20230123191728.2928839-1-tjmercier@google.com>
References: <20230123191728.2928839-1-tjmercier@google.com>
Subject: [PATCH v2 1/4] memcg: Track exported dma-buffers
From: "T.J. Mercier" <tjmercier@google.com>
To: tjmercier@google.com, Tejun Heo, Zefan Li, Johannes Weiner,
    Jonathan Corbet, Sumit Semwal, Christian König, Michal Hocko,
    Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton
Cc: android-mm@google.com, linux-doc@vger.kernel.org, selinux@vger.kernel.org,
    daniel.vetter@ffwll.ch, cmllamas@google.com, dri-devel@lists.freedesktop.org,
    linux-kernel@vger.kernel.org, linaro-mm-sig@lists.linaro.org, linux-mm@kvack.org,
    linux-security-module@vger.kernel.org, jstultz@google.com, jeffv@google.com,
    cgroups@vger.kernel.org, linux-media@vger.kernel.org

When a buffer is exported to userspace, use memcg to attribute the
buffer to the allocating cgroup until all buffer references are
released.

Unlike the dmabuf sysfs stats implementation, this memcg accounting
avoids contention over the kernfs_rwsem incurred when creating or
removing nodes.

Signed-off-by: T.J. Mercier <tjmercier@google.com>
---
 Documentation/admin-guide/cgroup-v2.rst |  4 +++
 drivers/dma-buf/dma-buf.c               | 13 +++++++++
 include/linux/dma-buf.h                 |  3 ++
 include/linux/memcontrol.h              | 38 +++++++++++++++++++++++++
 mm/memcontrol.c                         | 19 +++++++++++++
 5 files changed, 77 insertions(+)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index c8ae7c897f14..538ae22bc514 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1455,6 +1455,10 @@ PAGE_SIZE multiple when read back.
 	  Amount of memory used for storing in-kernel data structures.
 
+	  dmabuf (npn)
+		Amount of memory used for exported DMA buffers allocated by the cgroup.
+		Stays with the allocating cgroup regardless of how the buffer is shared.
+
 	  workingset_refault_anon
 		Number of refaults of previously evicted anonymous pages.
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index e6528767efc7..a6a8cb5cb32d 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -75,6 +75,9 @@ static void dma_buf_release(struct dentry *dentry)
 	 */
 	BUG_ON(dmabuf->cb_in.active || dmabuf->cb_out.active);
 
+	mem_cgroup_uncharge_dmabuf(dmabuf->memcg, PAGE_ALIGN(dmabuf->size) / PAGE_SIZE);
+	mem_cgroup_put(dmabuf->memcg);
+
 	dma_buf_stats_teardown(dmabuf);
 	dmabuf->ops->release(dmabuf);
 
@@ -673,6 +676,13 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
 	if (ret)
 		goto err_dmabuf;
 
+	dmabuf->memcg = get_mem_cgroup_from_mm(current->mm);
+	if (!mem_cgroup_charge_dmabuf(dmabuf->memcg, PAGE_ALIGN(dmabuf->size) / PAGE_SIZE,
+				      GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto err_memcg;
+	}
+
 	file->private_data = dmabuf;
 	file->f_path.dentry->d_fsdata = dmabuf;
 	dmabuf->file = file;
@@ -683,6 +693,9 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info)
 
 	return dmabuf;
 
+err_memcg:
+	mem_cgroup_put(dmabuf->memcg);
+	dma_buf_stats_teardown(dmabuf);
 err_dmabuf:
 	if (!resv)
 		dma_resv_fini(dmabuf->resv);
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 6fa8d4e29719..1f0ffb8e4bf5 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 
 struct device;
 struct dma_buf;
@@ -446,6 +447,8 @@ struct dma_buf {
 		struct dma_buf *dmabuf;
 	} *sysfs_entry;
 #endif
+	/* The cgroup to which this buffer is currently attributed */
+	struct mem_cgroup *memcg;
 };
 
 /**
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index d3c8203cab6c..c10b8565fdbf 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -37,6 +37,7 @@ enum memcg_stat_item {
 	MEMCG_KMEM,
 	MEMCG_ZSWAP_B,
 	MEMCG_ZSWAPPED,
+	MEMCG_DMABUF,
 	MEMCG_NR_STAT,
 };
 
@@ -673,6 +674,25 @@ static inline int mem_cgroup_charge(struct folio *folio, struct mm_struct *mm,
 int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
 				   gfp_t gfp, swp_entry_t entry);
 
+/**
+ * mem_cgroup_charge_dmabuf - Charge dma-buf memory to a cgroup and update stat counter
+ * @memcg: memcg to charge
+ * @nr_pages: number of pages to charge
+ * @gfp_mask: reclaim mode
+ *
+ * Charges @nr_pages to @memcg. Returns %true if the charge fits within
+ * @memcg's configured limit, %false if it does not.
+ */
+bool __mem_cgroup_charge_dmabuf(struct mem_cgroup *memcg, unsigned int nr_pages, gfp_t gfp_mask);
+static inline bool mem_cgroup_charge_dmabuf(struct mem_cgroup *memcg, unsigned int nr_pages,
+					    gfp_t gfp_mask)
+{
+	if (mem_cgroup_disabled())
+		return 0;
+	return __mem_cgroup_charge_dmabuf(memcg, nr_pages, gfp_mask);
+}
+
 void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
 
 void __mem_cgroup_uncharge(struct folio *folio);
@@ -690,6 +710,14 @@ static inline void mem_cgroup_uncharge(struct folio *folio)
 	__mem_cgroup_uncharge(folio);
 }
 
+void __mem_cgroup_uncharge_dmabuf(struct mem_cgroup *memcg, unsigned int nr_pages);
+static inline void mem_cgroup_uncharge_dmabuf(struct mem_cgroup *memcg, unsigned int nr_pages)
+{
+	if (mem_cgroup_disabled())
+		return;
+	__mem_cgroup_uncharge_dmabuf(memcg, nr_pages);
+}
+
 void __mem_cgroup_uncharge_list(struct list_head *page_list);
 static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
 {
@@ -1242,6 +1270,12 @@ static inline int mem_cgroup_swapin_charge_folio(struct folio *folio,
 	return 0;
 }
 
+static inline bool mem_cgroup_charge_dmabuf(struct mem_cgroup *memcg, unsigned int nr_pages,
+					    gfp_t gfp_mask)
+{
+	return true;
+}
+
 static inline void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry)
 {
 }
@@ -1250,6 +1284,10 @@ static inline void mem_cgroup_uncharge(struct folio *folio)
 {
 }
 
+static inline void mem_cgroup_uncharge_dmabuf(struct mem_cgroup *memcg, unsigned int nr_pages)
+{
+}
+
 static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
 {
 }
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ab457f0394ab..375d18370f4b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1502,6 +1502,7 @@ static const struct memory_stat memory_stats[] = {
 	{ "unevictable",		NR_UNEVICTABLE },
 	{ "slab_reclaimable",		NR_SLAB_RECLAIMABLE_B },
 	{ "slab_unreclaimable",		NR_SLAB_UNRECLAIMABLE_B },
+	{ "dmabuf",			MEMCG_DMABUF },
 
 	/* The memory events */
 	{ "workingset_refault_anon",	WORKINGSET_REFAULT_ANON },
@@ -4042,6 +4043,7 @@ static const unsigned int memcg1_stats[] = {
 	WORKINGSET_REFAULT_ANON,
 	WORKINGSET_REFAULT_FILE,
 	MEMCG_SWAP,
+	MEMCG_DMABUF,
 };
 
 static const char *const memcg1_stat_names[] = {
@@ -4057,6 +4059,7 @@ static const char *const memcg1_stat_names[] = {
 	"workingset_refault_anon",
 	"workingset_refault_file",
 	"swap",
+	"dmabuf",
 };
 
 /* Universal VM events cgroup1 shows, original sort order */
@@ -7299,6 +7302,22 @@ void mem_cgroup_uncharge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages)
 	refill_stock(memcg, nr_pages);
 }
 
+bool __mem_cgroup_charge_dmabuf(struct mem_cgroup *memcg, unsigned int nr_pages, gfp_t gfp_mask)
+{
+	if (try_charge(memcg, gfp_mask, nr_pages) == 0) {
+		mod_memcg_state(memcg, MEMCG_DMABUF, nr_pages);
+		return true;
+	}
+
+	return false;
+}
+
+void __mem_cgroup_uncharge_dmabuf(struct mem_cgroup *memcg, unsigned int nr_pages)
+{
+	mod_memcg_state(memcg, MEMCG_DMABUF, -nr_pages);
+	refill_stock(memcg, nr_pages);
+}
+
 static int __init cgroup_memory(char *s)
 {
 	char *token;
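
The net effect of the hunks above, condensed for illustration (not additional code
in this series; the example_* helpers are hypothetical and the error path around
dma_buf_stats_teardown() is omitted):

/* On export: attribute the buffer to the exporting task's memcg. */
static bool example_charge_on_export(struct dma_buf *dmabuf)
{
	unsigned int nr_pages = PAGE_ALIGN(dmabuf->size) / PAGE_SIZE;

	dmabuf->memcg = get_mem_cgroup_from_mm(current->mm);
	if (!mem_cgroup_charge_dmabuf(dmabuf->memcg, nr_pages, GFP_KERNEL)) {
		mem_cgroup_put(dmabuf->memcg);	/* over limit: export fails with -ENOMEM */
		return false;
	}
	return true;
}

/* On release of the last buffer reference: give the pages back to that memcg. */
static void example_uncharge_on_release(struct dma_buf *dmabuf)
{
	unsigned int nr_pages = PAGE_ALIGN(dmabuf->size) / PAGE_SIZE;

	mem_cgroup_uncharge_dmabuf(dmabuf->memcg, nr_pages);
	mem_cgroup_put(dmabuf->memcg);
}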
Mercier" X-Patchwork-Id: 13112839 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from gabe.freedesktop.org (gabe.freedesktop.org [131.252.210.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 551C4C54EAA for ; Mon, 23 Jan 2023 19:18:06 +0000 (UTC) Received: from gabe.freedesktop.org (localhost [127.0.0.1]) by gabe.freedesktop.org (Postfix) with ESMTP id 69FCE10E1F4; Mon, 23 Jan 2023 19:18:05 +0000 (UTC) Received: from mail-yb1-xb49.google.com (mail-yb1-xb49.google.com [IPv6:2607:f8b0:4864:20::b49]) by gabe.freedesktop.org (Postfix) with ESMTPS id 1151710E1F4 for ; Mon, 23 Jan 2023 19:18:04 +0000 (UTC) Received: by mail-yb1-xb49.google.com with SMTP id w14-20020a25ac0e000000b007d519140f18so14008547ybi.3 for ; Mon, 23 Jan 2023 11:18:04 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=9WOZzSEfhu/PT9TWBWMQDCCNsrofWV34oFDU/0sj1mQ=; b=CLJ2JxK/Tm4oa2T67r1/7AoCewa4cPoaCpXx/gVHbZ1m+FYlbdBISpcie92wke9qiT ywnZGTRRWgUyvtV2GeIN6CSF1A1BpACoPsOvCJSqkdka36IG7lwA9jHJB0/1Y37BiDn6 UZyz4QR44eh7OHGKOjLnRuXG7qELlMRlfRNYaxnDlTMYAnQZZAKF7GLGFraikV5q7pPa PqEGE55YfXGgQoOaL42PCJmSLM5OLIrr2pXTamoSK+ZBJQZ3xablgI1e6ZxykShX+C1V OGCeujU9/+VSQurUIb0N4LKyFw0PX0Xvt3l2bOlWDy6qNiZqzVXsJcA5m/zeCrF4rUUR ohNg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=9WOZzSEfhu/PT9TWBWMQDCCNsrofWV34oFDU/0sj1mQ=; b=j/7LGHVAACKVQR09XvFTXTzLnRf6y2p7Rackd2LcUN8BNhSCd99HreEQi3hoKJBWIs vK8bta/o3TpZmHGQKyKI9pTJ9MC4AwGGSYq2AgclMW7gPmDee20j0AXO4uFK5vRAcj+j UwIICatq9Ww4g6nvjMQqSC8OvdA0BHCj8mwpyyrAxyth3oOwCmwGyvNIQlvFe/SH154v ir+RPfocL4dNxNyn4tx7s97bWchSKKk/0hzR8jAp4h8NwR7f0JBaicP9lRlkz6vuuWpr 7TcapUARrfjG00M3jHQoonA3SaRp6lsyXPRp+N3nCIIthGf8RfAfOlCqme34AlFnlj8X 0RqQ== X-Gm-Message-State: AFqh2kqoLieFQ8h2qPCGql6jcPqH2gsghIZlLvSD16EyB8eZUvy5wXjB aEUYWdN1rQS5P+N0jffRAIWK2xynKQHA6V0= X-Google-Smtp-Source: AMrXdXt5UsJV8XnGROzCqwUjMEtxGJTY0AAa2SUnm4rAwiP8wDQKiwcGvs1ceYITQIG6UCVtpU2h3/VNxLbuwi8= X-Received: from tj.c.googlers.com ([fda3:e722:ac3:cc00:20:ed76:c0a8:53a]) (user=tjmercier job=sendgmr) by 2002:a25:dc92:0:b0:7b0:3379:9c00 with SMTP id y140-20020a25dc92000000b007b033799c00mr3149691ybe.359.1674501483256; Mon, 23 Jan 2023 11:18:03 -0800 (PST) Date: Mon, 23 Jan 2023 19:17:24 +0000 In-Reply-To: <20230123191728.2928839-1-tjmercier@google.com> Mime-Version: 1.0 References: <20230123191728.2928839-1-tjmercier@google.com> X-Mailer: git-send-email 2.39.0.246.g2a6d74b583-goog Message-ID: <20230123191728.2928839-3-tjmercier@google.com> Subject: [PATCH v2 2/4] dmabuf: Add cgroup charge transfer function From: "T.J. 
Mercier" To: tjmercier@google.com, Sumit Semwal , " =?utf-8?q?Christian_K=C3=B6nig?= " X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: android-mm@google.com, selinux@vger.kernel.org, daniel.vetter@ffwll.ch, hannes@cmpxchg.org, cmllamas@google.com, dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org, linaro-mm-sig@lists.linaro.org, linux-security-module@vger.kernel.org, jstultz@google.com, jeffv@google.com, cgroups@vger.kernel.org, linux-media@vger.kernel.org Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" The dma_buf_transfer_charge function provides a way for processes to transfer charge of a buffer to a different cgroup. This is essential for the cases where a central allocator process does allocations for various subsystems, hands over the fd to the client who requested the memory, and drops all references to the allocated memory. Signed-off-by: T.J. Mercier --- drivers/dma-buf/dma-buf.c | 56 ++++++++++++++++++++++++++++++++++++++ include/linux/dma-buf.h | 1 + include/linux/memcontrol.h | 5 ++++ 3 files changed, 62 insertions(+) diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index a6a8cb5cb32d..ac3d02a7ecf8 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -11,6 +11,7 @@ * refining of this idea. */ +#include #include #include #include @@ -1626,6 +1627,61 @@ void dma_buf_vunmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map) } EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap_unlocked, DMA_BUF); +/** + * dma_buf_transfer_charge - Change the cgroup to which the provided dma_buf is charged. + * @dmabuf_file: [in] file for buffer whose charge will be migrated to a different cgroup + * @target: [in] the task_struct of the destination process for the cgroup charge + * + * Only tasks that belong to the same cgroup the buffer is currently charged to + * may call this function, otherwise it will return -EPERM. + * + * Returns 0 on success, or a negative errno code otherwise. 
+ */
+int dma_buf_transfer_charge(struct file *dmabuf_file, struct task_struct *target)
+{
+	struct mem_cgroup *current_cg, *target_cg;
+	struct dma_buf *dmabuf;
+	unsigned int nr_pages;
+	int ret = 0;
+
+	if (!IS_ENABLED(CONFIG_MEMCG))
+		return 0;
+
+	if (WARN_ON(!dmabuf_file) || WARN_ON(!target))
+		return -EINVAL;
+
+	if (!is_dma_buf_file(dmabuf_file))
+		return -EBADF;
+	dmabuf = dmabuf_file->private_data;
+
+	nr_pages = PAGE_ALIGN(dmabuf->size) / PAGE_SIZE;
+	current_cg = mem_cgroup_from_task(current);
+	target_cg = get_mem_cgroup_from_mm(target->mm);
+
+	if (current_cg == target_cg)
+		goto skip_transfer;
+
+	if (!mem_cgroup_charge_dmabuf(target_cg, nr_pages, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto skip_transfer;
+	}
+
+	if (cmpxchg(&dmabuf->memcg, current_cg, target_cg) != current_cg) {
+		/* Only the current owner can transfer the charge */
+		ret = -EPERM;
+		mem_cgroup_uncharge_dmabuf(target_cg, nr_pages);
+		goto skip_transfer;
+	}
+
+	mem_cgroup_uncharge_dmabuf(current_cg, nr_pages);
+	mem_cgroup_put(current_cg); /* unref from buffer - buffer keeps new ref to target_cg */
+	return 0;
+
+skip_transfer:
+	mem_cgroup_put(target_cg);
+	return ret;
+}
+
 #ifdef CONFIG_DEBUG_FS
 static int dma_buf_debug_show(struct seq_file *s, void *unused)
 {
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 1f0ffb8e4bf5..f25eb8e60fb2 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -634,4 +634,5 @@ int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map);
 void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map);
 int dma_buf_vmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map);
 void dma_buf_vunmap_unlocked(struct dma_buf *dmabuf, struct iosys_map *map);
+int dma_buf_transfer_charge(struct file *dmabuf_file, struct task_struct *target);
 #endif /* __DMA_BUF_H__ */
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index c10b8565fdbf..009298a446fe 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1335,6 +1335,11 @@ struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css)
 	return NULL;
 }
 
+static inline struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p)
+{
+	return NULL;
+}
+
 static inline void obj_cgroup_put(struct obj_cgroup *objcg)
 {
 }
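
As an illustration of the call pattern described in the commit message (not part
of this series), a hypothetical in-kernel user that receives a dma-buf fd from a
central allocator and moves the charge to the client task before handing the
buffer over; the example_* name is made up:

static int example_hand_over_dmabuf(int fd, struct task_struct *client)
{
	struct file *file = fget(fd);
	int ret;

	if (!file)
		return -EBADF;

	/*
	 * Succeeds only while the calling task's memcg still owns the
	 * charge; returns -ENOMEM if the client's memcg cannot absorb
	 * the buffer, -EPERM if ownership has already moved elsewhere.
	 */
	ret = dma_buf_transfer_charge(file, client);

	fput(file);
	return ret;
}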