
[v16,05/16] x86/sgx: Implement basic EPC misc cgroup functionality

Message ID 20240821015404.6038-6-haitao.huang@linux.intel.com (mailing list archive)
State New, archived
Series Add Cgroup support for SGX EPC memory

Commit Message

Haitao Huang Aug. 21, 2024, 1:53 a.m. UTC
From: Kristen Carlson Accardi <kristen@linux.intel.com>

SGX Enclave Page Cache (EPC) memory allocations are separate from normal
RAM allocations, and are managed solely by the SGX subsystem. The
existing cgroup memory controller cannot be used to limit or account for
SGX EPC memory, which is a desirable feature in some environments. For
instance, within a Kubernetes environment, while a user may specify a
particular EPC quota for a pod, the orchestrator requires a mechanism to
enforce that the pod's actual runtime EPC usage does not exceed the
allocated quota.

Utilize the misc controller [admin-guide/cgroup-v2.rst, 5-9. Misc] to
limit and track EPC allocations per cgroup. Earlier patches have added
the "sgx_epc" resource type in the misc cgroup subsystem. Add basic
support in the SGX driver as the "sgx_epc" resource provider:

- Set "capacity" of EPC by calling misc_cg_set_capacity()
- Update EPC usage counter, "current", by calling charge and uncharge
APIs for EPC allocation and deallocation, respectively.
- Set up sgx_epc resource type specific callbacks, which perform
initialization and cleanup during cgroup allocation and deallocation,
respectively.

With these changes, the misc cgroup controller enables users to set a hard
limit for EPC usage in the "misc.max" interface file. It reports current
usage in "misc.current", the total EPC memory available in
"misc.capacity", and the number of times EPC usage reached the max limit
in "misc.events".
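As a rough sketch of how these interface files are consumed from userspace (paths assume cgroup v2 is mounted at /sys/fs/cgroup with the misc controller available; the group name "sgx_test" and the 64 MiB limit are illustrative):

```shell
# Total EPC on the system, reported once the SGX driver registers capacity:
grep sgx_epc /sys/fs/cgroup/misc.capacity

# Enable the misc controller for child cgroups, then create one:
echo "+misc" > /sys/fs/cgroup/cgroup.subtree_control
mkdir /sys/fs/cgroup/sgx_test

# Cap the group's EPC usage at 64 MiB (misc.max takes "<resource> <bytes>"):
echo "sgx_epc $((64 * 1024 * 1024))" > /sys/fs/cgroup/sgx_test/misc.max

# Current usage, and the number of times the limit was hit:
grep sgx_epc /sys/fs/cgroup/sgx_test/misc.current
grep sgx_epc /sys/fs/cgroup/sgx_test/misc.events
```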

For now, the EPC cgroup simply blocks additional EPC allocation in
sgx_alloc_epc_page() when the limit is reached. Reclaimable pages are
still tracked in the global active list and are reclaimed only by the
global reclaimer when the total free page count falls below a threshold.

Later patches will reorganize the tracking and reclamation code in the
global reclaimer and implement per-cgroup tracking and reclaiming.

Co-developed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Kristen Carlson Accardi <kristen@linux.intel.com>
Co-developed-by: Haitao Huang <haitao.huang@linux.intel.com>
Signed-off-by: Haitao Huang <haitao.huang@linux.intel.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Reviewed-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Kai Huang <kai.huang@intel.com>
Tested-by: Jarkko Sakkinen <jarkko@kernel.org>
---
V16:
- Proper handling for failures during init (Kai)
- Register ops and capacity at the end when SGX is ready to handle
  callbacks.

V15:
- Declare __init for sgx_cgroup_init() (Jarkko)
- Disable SGX when sgx_cgroup_init() fails (Jarkko)

V13:
- Remove unneeded includes. (Kai)

V12:
- Remove CONFIG_CGROUP_SGX_EPC and make sgx cgroup implementation
conditionally compiled with CONFIG_CGROUP_MISC. (Jarkko)

V11:
- Update copyright and format better (Kai)
- Create wrappers to remove #ifdefs in c file. (Kai)
- Remove unneeded comments (Kai)

V10:
- Shorten function, variable, struct names, s/sgx_epc_cgroup/sgx_cgroup. (Jarkko)
- Use enums instead of booleans for the parameters. (Dave, Jarkko)

V8:
- Remove null checks for epc_cg in try_charge()/uncharge(). (Jarkko)
- Remove extra space, '_INTEL'. (Jarkko)

V7:
- Use a static for root cgroup (Kai)
- Wrap epc_cg field in sgx_epc_page struct with #ifdef (Kai)
- Correct check for charge API return (Kai)
- Start initialization in SGX device driver init (Kai)
- Remove unneeded BUG_ON (Kai)
- Split sgx_get_current_epc_cg() out of sgx_epc_cg_try_charge() (Kai)

V6:
- Split the original large patch "Limit process EPC usage with misc
cgroup controller" and restructure it (Kai)
---
 arch/x86/kernel/cpu/sgx/Makefile     |  1 +
 arch/x86/kernel/cpu/sgx/epc_cgroup.c | 93 ++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/sgx/epc_cgroup.h | 78 +++++++++++++++++++++++
 arch/x86/kernel/cpu/sgx/main.c       | 44 +++++++++++--
 arch/x86/kernel/cpu/sgx/sgx.h        | 24 +++++++
 include/linux/misc_cgroup.h          |  2 +
 6 files changed, 238 insertions(+), 4 deletions(-)
 create mode 100644 arch/x86/kernel/cpu/sgx/epc_cgroup.c
 create mode 100644 arch/x86/kernel/cpu/sgx/epc_cgroup.h

Comments

Huang, Kai Aug. 27, 2024, 10:21 a.m. UTC | #1
On Tue, 2024-08-20 at 18:53 -0700, Haitao Huang wrote:
> +/**
> + * Register capacity and ops for SGX cgroup.
> + * Only called at the end of sgx_init() when SGX is ready to handle the ops
> + * callbacks.
> + */

Got this warning when building with W=1:

arch/x86/kernel/cpu/sgx/epc_cgroup.c:420: warning: This comment starts with
'/**', but isn't a kernel-doc comment. Refer
Documentation/doc-guide/kernel-doc.rst
 * Register capacity and ops for SGX cgroup.

It should be fixed.

> +void __init sgx_cgroup_register(void)
> +{
> +	unsigned int nid = first_node(sgx_numa_mask);
> +	unsigned int first = nid;
> +	u64 capacity = 0;
> +
> +	misc_cg_set_ops(MISC_CG_RES_SGX_EPC, &sgx_cgroup_ops);
> +
> +	/* sgx_numa_mask is not empty when this is called */
> +	do {
> +		capacity += sgx_numa_nodes[nid].size;
> +		nid = next_node_in(nid, sgx_numa_mask);
> +	} while (nid != first);
> +	misc_cg_set_capacity(MISC_CG_RES_SGX_EPC, capacity);
> +}

Nit (leave to you):

Is sgx_cgroup_enable() better?
Huang, Kai Aug. 27, 2024, 11:11 p.m. UTC | #2
> +static void sgx_cgroup_misc_init(struct misc_cg *cg, struct sgx_cgroup *sgx_cg)
> +{
> +	cg->res[MISC_CG_RES_SGX_EPC].priv = sgx_cg;
> +	sgx_cg->cg = cg;
> +}
> +

[...]

> +int __init sgx_cgroup_init(void)
> +{
> +	sgx_cgroup_misc_init(misc_cg_root(), &sgx_cg_root);
> +
> +	return 0;
> +}
> +
> +/**
> + * Register capacity and ops for SGX cgroup.
> + * Only called at the end of sgx_init() when SGX is ready to handle the ops
> + * callbacks.
> + */
> +void __init sgx_cgroup_register(void)
> +{
> +	unsigned int nid = first_node(sgx_numa_mask);
> +	unsigned int first = nid;
> +	u64 capacity = 0;
> +
> +	misc_cg_set_ops(MISC_CG_RES_SGX_EPC, &sgx_cgroup_ops);
> +
> +	/* sgx_numa_mask is not empty when this is called */
> +	do {
> +		capacity += sgx_numa_nodes[nid].size;
> +		nid = next_node_in(nid, sgx_numa_mask);
> +	} while (nid != first);
> +	misc_cg_set_capacity(MISC_CG_RES_SGX_EPC, capacity);
> +}

[...]

>   
> @@ -930,6 +961,9 @@ static int __init sgx_init(void)
>   	if (ret)
>   		goto err_kthread;
>   
> +	ret = sgx_cgroup_init();
> +	if (ret)
> +		goto err_provision;
>   	/*
>   	 * Always try to initialize the native *and* KVM drivers.
>   	 * The KVM driver is less picky than the native one and
> @@ -943,6 +977,8 @@ static int __init sgx_init(void)
>   	if (sgx_vepc_init() && ret)
>   		goto err_provision;

In sgx_cgroup_init():

     sgx_cgroup_misc_init(misc_cg_root(), &sgx_cg_root);

.. also cannot fail.

I think it should be moved to the sgx_cgroup_register().  Otherwise, if 
any step after sgx_cgroup_init() fails, there's no unwind for the above 
operation.

The consequence is the misc_cg_root()->res[EPC].priv will remain 
pointing to the SGX root cgroup.

It shouldn't cause any real issue for now, but it's weird to have that 
set, and can potentially cause problem in the future.

>   
> +	sgx_cgroup_register();
> +
>   	return 0;
>   
>   err_provision:

So, I think we should do:

1) Rename sgx_cgroup_register() -> sgx_cgroup_init(), and move the

	sgx_cgroup_misc_init(misc_cg_root(), &sgx_cg_root);

to it.  All operations in the (new) sgx_cgroup_init() won't fail.

2) Remove (existing) sgx_cgroup_init() from this patch, but introduce it 
in the patch "x86/sgx: Implement async reclamation for cgroup" and 
rename it to sgx_cgroup_prepare() or something.  It just allocates 
workqueue inside.  And sgx_cgroup_deinit() -> sgx_cgroup_cleanup().

Makes sense?
Huang, Kai Aug. 28, 2024, 12:01 a.m. UTC | #3
On 28/08/2024 11:11 am, Huang, Kai wrote:
>> +static void sgx_cgroup_misc_init(struct misc_cg *cg, struct 
>> sgx_cgroup *sgx_cg)
>> +{
>> +    cg->res[MISC_CG_RES_SGX_EPC].priv = sgx_cg;
>> +    sgx_cg->cg = cg;
>> +}
>> +
> 
> [...]
> 
>> +int __init sgx_cgroup_init(void)
>> +{
>> +    sgx_cgroup_misc_init(misc_cg_root(), &sgx_cg_root);
>> +
>> +    return 0;
>> +}
>> +
>> +/**
>> + * Register capacity and ops for SGX cgroup.
>> + * Only called at the end of sgx_init() when SGX is ready to handle 
>> the ops
>> + * callbacks.
>> + */
>> +void __init sgx_cgroup_register(void)
>> +{
>> +    unsigned int nid = first_node(sgx_numa_mask);
>> +    unsigned int first = nid;
>> +    u64 capacity = 0;
>> +
>> +    misc_cg_set_ops(MISC_CG_RES_SGX_EPC, &sgx_cgroup_ops);
>> +
>> +    /* sgx_numa_mask is not empty when this is called */
>> +    do {
>> +        capacity += sgx_numa_nodes[nid].size;
>> +        nid = next_node_in(nid, sgx_numa_mask);
>> +    } while (nid != first);
>> +    misc_cg_set_capacity(MISC_CG_RES_SGX_EPC, capacity);
>> +}
> 
> [...]
> 
>> @@ -930,6 +961,9 @@ static int __init sgx_init(void)
>>       if (ret)
>>           goto err_kthread;
>> +    ret = sgx_cgroup_init();
>> +    if (ret)
>> +        goto err_provision;
>>       /*
>>        * Always try to initialize the native *and* KVM drivers.
>>        * The KVM driver is less picky than the native one and
>> @@ -943,6 +977,8 @@ static int __init sgx_init(void)
>>       if (sgx_vepc_init() && ret)
>>           goto err_provision;
> 
> In sgx_cgroup_init():
> 
>      sgx_cgroup_misc_init(misc_cg_root(), &sgx_cg_root);
> 
> .. also cannot fail.
> 
> I think it should be moved to the sgx_cgroup_register().  Otherwise, if 
> any step after sgx_cgroup_init() fails, there's no unwind for the above 
> operation.
> 
> The consequence is the misc_cg_root()->res[EPC].priv will remain 
> pointing to the SGX root cgroup.
> 
> It shouldn't cause any real issue for now, but it's weird to have that 
> set, and can potentially cause problem in the future.
> 
>> +    sgx_cgroup_register();
>> +
>>       return 0;
>>   err_provision:
> 
> So, I think we should do:
> 
> 1) Rename sgx_cgroup_register() -> sgx_cgroup_init(), and move the
> 
>      sgx_cgroup_misc_init(misc_cg_root(), &sgx_cg_root);
> 
> to it.  All operations in the (new) sgx_cgroup_init() won't fail.
> 
> 2) Remove (existing) sgx_cgroup_init() from this patch, but introduce it 
> in the patch "x86/sgx: Implement async reclamation for cgroup" and 
> rename it to sgx_cgroup_prepare() or something.  It just allocates 
> workqueue inside.  And sgx_cgroup_deinit() -> sgx_cgroup_cleanup().
> 
> Makes sense?
> 
> 

With the above addressed, and the k-doc warning fixed:

Reviewed-by: Kai Huang <kai.huang@intel.com>

Patch

diff --git a/arch/x86/kernel/cpu/sgx/Makefile b/arch/x86/kernel/cpu/sgx/Makefile
index 9c1656779b2a..081cb424575e 100644
--- a/arch/x86/kernel/cpu/sgx/Makefile
+++ b/arch/x86/kernel/cpu/sgx/Makefile
@@ -4,3 +4,4 @@  obj-y += \
 	ioctl.o \
 	main.o
 obj-$(CONFIG_X86_SGX_KVM)	+= virt.o
+obj-$(CONFIG_CGROUP_MISC)	+= epc_cgroup.o
diff --git a/arch/x86/kernel/cpu/sgx/epc_cgroup.c b/arch/x86/kernel/cpu/sgx/epc_cgroup.c
new file mode 100644
index 000000000000..0e422fef02bb
--- /dev/null
+++ b/arch/x86/kernel/cpu/sgx/epc_cgroup.c
@@ -0,0 +1,93 @@ 
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2022-2024 Intel Corporation. */
+
+#include<linux/slab.h>
+#include "epc_cgroup.h"
+
+/* The root SGX EPC cgroup */
+static struct sgx_cgroup sgx_cg_root;
+
+/**
+ * sgx_cgroup_try_charge() - try to charge cgroup for a single EPC page
+ *
+ * @sgx_cg:	The EPC cgroup to be charged for the page.
+ * Return:
+ * * %0 - If successfully charged.
+ * * -errno - for failures.
+ */
+int sgx_cgroup_try_charge(struct sgx_cgroup *sgx_cg)
+{
+	return misc_cg_try_charge(MISC_CG_RES_SGX_EPC, sgx_cg->cg, PAGE_SIZE);
+}
+
+/**
+ * sgx_cgroup_uncharge() - uncharge a cgroup for an EPC page
+ * @sgx_cg:	The charged sgx cgroup.
+ */
+void sgx_cgroup_uncharge(struct sgx_cgroup *sgx_cg)
+{
+	misc_cg_uncharge(MISC_CG_RES_SGX_EPC, sgx_cg->cg, PAGE_SIZE);
+}
+
+static void sgx_cgroup_free(struct misc_cg *cg)
+{
+	struct sgx_cgroup *sgx_cg;
+
+	sgx_cg = sgx_cgroup_from_misc_cg(cg);
+	if (!sgx_cg)
+		return;
+
+	kfree(sgx_cg);
+}
+
+static void sgx_cgroup_misc_init(struct misc_cg *cg, struct sgx_cgroup *sgx_cg)
+{
+	cg->res[MISC_CG_RES_SGX_EPC].priv = sgx_cg;
+	sgx_cg->cg = cg;
+}
+
+static int sgx_cgroup_alloc(struct misc_cg *cg)
+{
+	struct sgx_cgroup *sgx_cg;
+
+	sgx_cg = kzalloc(sizeof(*sgx_cg), GFP_KERNEL);
+	if (!sgx_cg)
+		return -ENOMEM;
+
+	sgx_cgroup_misc_init(cg, sgx_cg);
+
+	return 0;
+}
+
+const struct misc_res_ops sgx_cgroup_ops = {
+	.alloc = sgx_cgroup_alloc,
+	.free = sgx_cgroup_free,
+};
+
+int __init sgx_cgroup_init(void)
+{
+	sgx_cgroup_misc_init(misc_cg_root(), &sgx_cg_root);
+
+	return 0;
+}
+
+/**
+ * Register capacity and ops for SGX cgroup.
+ * Only called at the end of sgx_init() when SGX is ready to handle the ops
+ * callbacks.
+ */
+void __init sgx_cgroup_register(void)
+{
+	unsigned int nid = first_node(sgx_numa_mask);
+	unsigned int first = nid;
+	u64 capacity = 0;
+
+	misc_cg_set_ops(MISC_CG_RES_SGX_EPC, &sgx_cgroup_ops);
+
+	/* sgx_numa_mask is not empty when this is called */
+	do {
+		capacity += sgx_numa_nodes[nid].size;
+		nid = next_node_in(nid, sgx_numa_mask);
+	} while (nid != first);
+	misc_cg_set_capacity(MISC_CG_RES_SGX_EPC, capacity);
+}
diff --git a/arch/x86/kernel/cpu/sgx/epc_cgroup.h b/arch/x86/kernel/cpu/sgx/epc_cgroup.h
new file mode 100644
index 000000000000..e74b1ea0b642
--- /dev/null
+++ b/arch/x86/kernel/cpu/sgx/epc_cgroup.h
@@ -0,0 +1,78 @@ 
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _SGX_EPC_CGROUP_H_
+#define _SGX_EPC_CGROUP_H_
+
+#include <asm/sgx.h>
+#include <linux/cgroup.h>
+#include <linux/misc_cgroup.h>
+
+#include "sgx.h"
+
+#ifndef CONFIG_CGROUP_MISC
+
+#define MISC_CG_RES_SGX_EPC MISC_CG_RES_TYPES
+struct sgx_cgroup;
+
+static inline struct sgx_cgroup *sgx_get_current_cg(void)
+{
+	return NULL;
+}
+
+static inline void sgx_put_cg(struct sgx_cgroup *sgx_cg) { }
+
+static inline int sgx_cgroup_try_charge(struct sgx_cgroup *sgx_cg)
+{
+	return 0;
+}
+
+static inline void sgx_cgroup_uncharge(struct sgx_cgroup *sgx_cg) { }
+
+static inline int __init sgx_cgroup_init(void)
+{
+	return 0;
+}
+
+static inline void __init sgx_cgroup_register(void) { }
+
+#else /* CONFIG_CGROUP_MISC */
+
+struct sgx_cgroup {
+	struct misc_cg *cg;
+};
+
+static inline struct sgx_cgroup *sgx_cgroup_from_misc_cg(struct misc_cg *cg)
+{
+	return (struct sgx_cgroup *)(cg->res[MISC_CG_RES_SGX_EPC].priv);
+}
+
+/**
+ * sgx_get_current_cg() - get the EPC cgroup of current process.
+ *
+ * Returned cgroup has its ref count increased by 1. Caller must call
+ * sgx_put_cg() to return the reference.
+ *
+ * Return: EPC cgroup to which the current task belongs to.
+ */
+static inline struct sgx_cgroup *sgx_get_current_cg(void)
+{
+	/* get_current_misc_cg() never returns NULL when Kconfig enabled */
+	return sgx_cgroup_from_misc_cg(get_current_misc_cg());
+}
+
+/**
+ * sgx_put_cg() - Put the EPC cgroup and reduce its ref count.
+ * @sgx_cg - EPC cgroup to put.
+ */
+static inline void sgx_put_cg(struct sgx_cgroup *sgx_cg)
+{
+	put_misc_cg(sgx_cg->cg);
+}
+
+int sgx_cgroup_try_charge(struct sgx_cgroup *sgx_cg);
+void sgx_cgroup_uncharge(struct sgx_cgroup *sgx_cg);
+int __init sgx_cgroup_init(void);
+void __init sgx_cgroup_register(void);
+
+#endif /* CONFIG_CGROUP_MISC */
+
+#endif /* _SGX_EPC_CGROUP_H_ */
diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index e64073fb4256..0fda964c0a7c 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -18,6 +18,7 @@ 
 #include "driver.h"
 #include "encl.h"
 #include "encls.h"
+#include "epc_cgroup.h"
 
 struct sgx_epc_section sgx_epc_sections[SGX_MAX_EPC_SECTIONS];
 static int sgx_nr_epc_sections;
@@ -35,14 +36,14 @@  static DEFINE_SPINLOCK(sgx_reclaimer_lock);
 static atomic_long_t sgx_nr_free_pages = ATOMIC_LONG_INIT(0);
 
 /* Nodes with one or more EPC sections. */
-static nodemask_t sgx_numa_mask;
+nodemask_t sgx_numa_mask;
 
 /*
  * Array with one list_head for each possible NUMA node.  Each
  * list contains all the sgx_epc_section's which are on that
  * node.
  */
-static struct sgx_numa_node *sgx_numa_nodes;
+struct sgx_numa_node *sgx_numa_nodes;
 
 static LIST_HEAD(sgx_dirty_page_list);
 
@@ -559,7 +560,16 @@  int sgx_unmark_page_reclaimable(struct sgx_epc_page *page)
  */
 struct sgx_epc_page *sgx_alloc_epc_page(void *owner, enum sgx_reclaim reclaim)
 {
+	struct sgx_cgroup *sgx_cg;
 	struct sgx_epc_page *page;
+	int ret;
+
+	sgx_cg = sgx_get_current_cg();
+	ret = sgx_cgroup_try_charge(sgx_cg);
+	if (ret) {
+		sgx_put_cg(sgx_cg);
+		return ERR_PTR(ret);
+	}
 
 	for ( ; ; ) {
 		page = __sgx_alloc_epc_page();
@@ -568,8 +578,10 @@  struct sgx_epc_page *sgx_alloc_epc_page(void *owner, enum sgx_reclaim reclaim)
 			break;
 		}
 
-		if (list_empty(&sgx_active_page_list))
-			return ERR_PTR(-ENOMEM);
+		if (list_empty(&sgx_active_page_list)) {
+			page = ERR_PTR(-ENOMEM);
+			break;
+		}
 
 		if (reclaim == SGX_NO_RECLAIM) {
 			page = ERR_PTR(-EBUSY);
@@ -585,6 +597,15 @@  struct sgx_epc_page *sgx_alloc_epc_page(void *owner, enum sgx_reclaim reclaim)
 		cond_resched();
 	}
 
+	if (!IS_ERR(page)) {
+		WARN_ON_ONCE(sgx_epc_page_get_cgroup(page));
+		/* sgx_put_cg() in sgx_free_epc_page() */
+		sgx_epc_page_set_cgroup(page, sgx_cg);
+	} else {
+		sgx_cgroup_uncharge(sgx_cg);
+		sgx_put_cg(sgx_cg);
+	}
+
 	if (sgx_should_reclaim(SGX_NR_LOW_PAGES))
 		wake_up(&ksgxd_waitq);
 
@@ -603,8 +624,16 @@  struct sgx_epc_page *sgx_alloc_epc_page(void *owner, enum sgx_reclaim reclaim)
 void sgx_free_epc_page(struct sgx_epc_page *page)
 {
 	struct sgx_epc_section *section = &sgx_epc_sections[page->section];
+	struct sgx_cgroup *sgx_cg = sgx_epc_page_get_cgroup(page);
 	struct sgx_numa_node *node = section->node;
 
+	/* sgx_cg could be NULL if called from __sgx_sanitize_pages() */
+	if (sgx_cg) {
+		sgx_cgroup_uncharge(sgx_cg);
+		sgx_put_cg(sgx_cg);
+		sgx_epc_page_set_cgroup(page, NULL);
+	}
+
 	spin_lock(&node->lock);
 
 	page->owner = NULL;
@@ -644,6 +673,8 @@  static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
 		section->pages[i].flags = 0;
 		section->pages[i].owner = NULL;
 		section->pages[i].poison = 0;
+		sgx_epc_page_set_cgroup(&section->pages[i], NULL);
+
 		list_add_tail(&section->pages[i].list, &sgx_dirty_page_list);
 	}
 
@@ -930,6 +961,9 @@  static int __init sgx_init(void)
 	if (ret)
 		goto err_kthread;
 
+	ret = sgx_cgroup_init();
+	if (ret)
+		goto err_provision;
 	/*
 	 * Always try to initialize the native *and* KVM drivers.
 	 * The KVM driver is less picky than the native one and
@@ -943,6 +977,8 @@  static int __init sgx_init(void)
 	if (sgx_vepc_init() && ret)
 		goto err_provision;
 
+	sgx_cgroup_register();
+
 	return 0;
 
 err_provision:
diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index ca34cd4f58ac..c5208da7c8eb 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -39,14 +39,35 @@  enum sgx_reclaim {
 	SGX_DO_RECLAIM
 };
 
+struct sgx_cgroup;
+
 struct sgx_epc_page {
 	unsigned int section;
 	u16 flags;
 	u16 poison;
 	struct sgx_encl_page *owner;
 	struct list_head list;
+#ifdef CONFIG_CGROUP_MISC
+	struct sgx_cgroup *sgx_cg;
+#endif
 };
 
+static inline void sgx_epc_page_set_cgroup(struct sgx_epc_page *page, struct sgx_cgroup *cg)
+{
+#ifdef CONFIG_CGROUP_MISC
+	page->sgx_cg = cg;
+#endif
+}
+
+static inline struct sgx_cgroup *sgx_epc_page_get_cgroup(struct sgx_epc_page *page)
+{
+#ifdef CONFIG_CGROUP_MISC
+	return page->sgx_cg;
+#else
+	return NULL;
+#endif
+}
+
 /*
  * Contains the tracking data for NUMA nodes having EPC pages. Most importantly,
  * the free page list local to the node is stored here.
@@ -58,6 +79,9 @@  struct sgx_numa_node {
 	spinlock_t lock;
 };
 
+extern nodemask_t sgx_numa_mask;
+extern struct sgx_numa_node *sgx_numa_nodes;
+
 /*
  * The firmware can define multiple chunks of EPC to the different areas of the
  * physical memory e.g. for memory areas of the each node. This structure is
diff --git a/include/linux/misc_cgroup.h b/include/linux/misc_cgroup.h
index b4119869b0d1..df88e1ff9877 100644
--- a/include/linux/misc_cgroup.h
+++ b/include/linux/misc_cgroup.h
@@ -48,6 +48,7 @@  struct misc_res_ops {
  * @watermark: Historical maximum usage of the resource.
  * @usage: Current usage of the resource.
  * @events: Number of times, the resource limit exceeded.
+ * @priv: resource specific data.
  */
 struct misc_res {
 	u64 max;
@@ -55,6 +56,7 @@  struct misc_res {
 	atomic64_t usage;
 	atomic64_t events;
 	atomic64_t events_local;
+	void *priv;
 };
 
 /**