
[v2] x86/sgx: Fix free_cnt counting logic in epc section

Message ID 20210120035320.19709-1-tianjia.zhang@linux.alibaba.com (mailing list archive)
State New, archived
Series [v2] x86/sgx: Fix free_cnt counting logic in epc section

Commit Message

tianjia.zhang Jan. 20, 2021, 3:53 a.m. UTC
It is more reasonable to increase `section->free_cnt` in
sgx_sanitize_section(), which runs in the ksgxd kernel thread, than
to set it to the number of EPC section pages at initialization.
Although sanitization is unlikely to fail, these pages cannot be
allocated immediately after initialization; they first need to be
reset by ksgxd.

At the same time, the taking of section->lock can be moved inside
the !ret flow, so that EREMOVE is done without holding the lock.
This matters because it is theoretically possible that ksgxd hasn't
finished sanitizing the EPC when userspace starts creating enclaves.

Reported-by: Jia Zhang <zhang.jia@linux.alibaba.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
---
 arch/x86/kernel/cpu/sgx/main.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)
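
To make the counting problem concrete, here is a sketch of the
pre-patch init path in sgx_setup_epc_section() (abbreviated, with
editorial annotations; only the quoted statements appear in the hunks
below):

	for (i = 0; i < nr_pages; i++) {
		...
		list_add_tail(&section->pages[i].list,
			      &section->init_laundry_list);
	}

	/*
	 * Pre-patch: all pages are counted as free right away, even
	 * though every one of them still sits on init_laundry_list and
	 * only reaches ->page_list after ksgxd has EREMOVE'd it.
	 */
	section->free_cnt = nr_pages;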

Comments

Sean Christopherson Jan. 20, 2021, 4:27 p.m. UTC | #1
On Wed, Jan 20, 2021, Tianjia Zhang wrote:
> It is more reasonable to increase `section->free_cnt` in
> sgx_sanitize_section(), which runs in the ksgxd kernel thread, than
> to set it to the number of EPC section pages at initialization.
> Although sanitization is unlikely to fail, these pages cannot be
> allocated immediately after initialization; they first need to be
> reset by ksgxd.
> 
> At the same time, the taking of section->lock can be moved inside
> the !ret flow, so that EREMOVE is done without holding the lock.
> This matters because it is theoretically possible that ksgxd hasn't
> finished sanitizing the EPC when userspace starts creating enclaves.

Moving the lock should be in a separate patch; they are clearly two different
functional changes.

> Reported-by: Jia Zhang <zhang.jia@linux.alibaba.com>
> Suggested-by: Sean Christopherson <seanjc@google.com>

Moving the lock was suggested by me; the original patch was not.

> Reviewed-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
> ---
>  arch/x86/kernel/cpu/sgx/main.c | 13 +++++++------
>  1 file changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> index c519fc5f6948..34a72a147983 100644
> --- a/arch/x86/kernel/cpu/sgx/main.c
> +++ b/arch/x86/kernel/cpu/sgx/main.c
> @@ -41,16 +41,18 @@ static void sgx_sanitize_section(struct sgx_epc_section *section)
>  		if (kthread_should_stop())
>  			return;
>  
> -		/* needed for access to ->page_list: */
> -		spin_lock(&section->lock);
> -
>  		page = list_first_entry(&section->init_laundry_list,
>  					struct sgx_epc_page, list);
>  
>  		ret = __eremove(sgx_get_epc_virt_addr(page));
> -		if (!ret)
> +
> +		/* needed for access to ->page_list: */
> +		spin_lock(&section->lock);

This can actually be even more precise, as the lock doesn't need to be taken
if __eremove() fails.  The lock protects section->page_list, not page->list.
At that point, the comment about why the lock is needed can probably be dropped?

> +
> +		if (!ret) {
>  			list_move(&page->list, &section->page_list);
> -		else
> +			section->free_cnt += 1;

Belated feedback: this can use "++".

> +		} else

Need curly braces here.

E.g. when all is said and done, this code can be:

		if (!ret) {
			spin_lock(&section->lock);
			list_move(&page->list, &section->page_list);
			section->free_cnt++;
			spin_unlock(&section->lock);
		} else {
			list_move_tail(&page->list, &dirty);
		}

>  			list_move_tail(&page->list, &dirty);
>  
>  		spin_unlock(&section->lock);
> @@ -646,7 +648,6 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
>  		list_add_tail(&section->pages[i].list, &section->init_laundry_list);
>  	}
>  
> -	section->free_cnt = nr_pages;
>  	return true;
>  }
>  
> -- 
> 2.19.1.3.ge56e4f7
>
Jarkko Sakkinen Jan. 20, 2021, 10:31 p.m. UTC | #2
On Wed, Jan 20, 2021 at 11:53:20AM +0800, Tianjia Zhang wrote:
> It is more reasonable to increase `section->free_cnt` in
> sgx_sanitize_section(), which runs in the ksgxd kernel thread, than

This is lacking reasoning of why.

/Jarkko

> to set it to the number of EPC section pages at initialization.
> Although sanitization is unlikely to fail, these pages cannot be
> allocated immediately after initialization; they first need to be
> reset by ksgxd.
> 
> At the same time, the taking of section->lock can be moved inside
> the !ret flow, so that EREMOVE is done without holding the lock.
> This matters because it is theoretically possible that ksgxd hasn't
> finished sanitizing the EPC when userspace starts creating enclaves.
> 
> Reported-by: Jia Zhang <zhang.jia@linux.alibaba.com>
> Suggested-by: Sean Christopherson <seanjc@google.com>
> Reviewed-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
> ---
>  arch/x86/kernel/cpu/sgx/main.c | 13 +++++++------
>  1 file changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> index c519fc5f6948..34a72a147983 100644
> --- a/arch/x86/kernel/cpu/sgx/main.c
> +++ b/arch/x86/kernel/cpu/sgx/main.c
> @@ -41,16 +41,18 @@ static void sgx_sanitize_section(struct sgx_epc_section *section)
>  		if (kthread_should_stop())
>  			return;
>  
> -		/* needed for access to ->page_list: */
> -		spin_lock(&section->lock);
> -
>  		page = list_first_entry(&section->init_laundry_list,
>  					struct sgx_epc_page, list);
>  
>  		ret = __eremove(sgx_get_epc_virt_addr(page));
> -		if (!ret)
> +
> +		/* needed for access to ->page_list: */
> +		spin_lock(&section->lock);
> +
> +		if (!ret) {
>  			list_move(&page->list, &section->page_list);
> -		else
> +			section->free_cnt += 1;
> +		} else
>  			list_move_tail(&page->list, &dirty);
>  
>  		spin_unlock(&section->lock);
> @@ -646,7 +648,6 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
>  		list_add_tail(&section->pages[i].list, &section->init_laundry_list);
>  	}
>  
> -	section->free_cnt = nr_pages;
>  	return true;
>  }
>  
> -- 
> 2.19.1.3.ge56e4f7
> 
>

Patch

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index c519fc5f6948..34a72a147983 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -41,16 +41,18 @@ static void sgx_sanitize_section(struct sgx_epc_section *section)
 		if (kthread_should_stop())
 			return;
 
-		/* needed for access to ->page_list: */
-		spin_lock(&section->lock);
-
 		page = list_first_entry(&section->init_laundry_list,
 					struct sgx_epc_page, list);
 
 		ret = __eremove(sgx_get_epc_virt_addr(page));
-		if (!ret)
+
+		/* needed for access to ->page_list: */
+		spin_lock(&section->lock);
+
+		if (!ret) {
 			list_move(&page->list, &section->page_list);
-		else
+			section->free_cnt += 1;
+		} else
 			list_move_tail(&page->list, &dirty);
 
 		spin_unlock(&section->lock);
@@ -646,7 +648,6 @@ static bool __init sgx_setup_epc_section(u64 phys_addr, u64 size,
 		list_add_tail(&section->pages[i].list, &section->init_laundry_list);
 	}
 
-	section->free_cnt = nr_pages;
 	return true;
 }
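
For reference, a sketch of the sanitization loop body with the review
feedback above folded in (lock taken only when EREMOVE succeeds, "++"
instead of "+= 1", braces on both branches); this combines the v2 hunk
with Sean's suggestion and is not necessarily the code as merged:

		page = list_first_entry(&section->init_laundry_list,
					struct sgx_epc_page, list);

		ret = __eremove(sgx_get_epc_virt_addr(page));
		if (!ret) {
			/* ->page_list is shared with the allocator. */
			spin_lock(&section->lock);
			list_move(&page->list, &section->page_list);
			section->free_cnt++;
			spin_unlock(&section->lock);
		} else {
			/* EREMOVE failed; retry the page on a later pass. */
			list_move_tail(&page->list, &dirty);
		}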