[for_v23,1/3] x86/sgx: Update the free page count in a single operation

Message ID 20191022224922.28144-2-sean.j.christopherson@intel.com (mailing list archive)
State New, archived
Series x86/sgx: More cleanup for v23

Commit Message

Sean Christopherson Oct. 22, 2019, 10:49 p.m. UTC
Use atomic_add() instead of running atomic_inc() in a loop to manually
do the equivalent addition.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kernel/cpu/sgx/main.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

Comments

Jarkko Sakkinen Oct. 23, 2019, 12:44 p.m. UTC | #1
On Tue, Oct 22, 2019 at 03:49:20PM -0700, Sean Christopherson wrote:
> Use atomic_add() instead of running atomic_inc() in a loop to manually
> do the equivalent addition.
> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
>  arch/x86/kernel/cpu/sgx/main.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> index 499a9b0740c8..d45bf6fca0c8 100644
> --- a/arch/x86/kernel/cpu/sgx/main.c
> +++ b/arch/x86/kernel/cpu/sgx/main.c
> @@ -195,8 +195,7 @@ static bool __init sgx_alloc_epc_section(u64 addr, u64 size,
>  		list_add_tail(&page->list, &section->unsanitized_page_list);
>  	}
>  
> -	for (i = 0; i < nr_pages; i++)
> -		atomic_inc(&sgx_nr_free_pages);
> +	atomic_add(nr_pages, &sgx_nr_free_pages);
>  
>  	return true;
>  
> -- 
> 2.22.0
> 

The reason I used atomic_inc() was that atomic_add() takes an int, which
could potentially overflow.

I'll ignore this as I'll do the revert that I promised to do.

/Jarkko
Jarkko Sakkinen Oct. 24, 2019, 1:11 p.m. UTC | #2
On Wed, Oct 23, 2019 at 03:44:49PM +0300, Jarkko Sakkinen wrote:
> On Tue, Oct 22, 2019 at 03:49:20PM -0700, Sean Christopherson wrote:
> > Use atomic_add() instead of running atomic_inc() in a loop to manually
> > do the equivalent addition.
> > 
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> > ---
> >  arch/x86/kernel/cpu/sgx/main.c | 3 +--
> >  1 file changed, 1 insertion(+), 2 deletions(-)
> > 
> > diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
> > index 499a9b0740c8..d45bf6fca0c8 100644
> > --- a/arch/x86/kernel/cpu/sgx/main.c
> > +++ b/arch/x86/kernel/cpu/sgx/main.c
> > @@ -195,8 +195,7 @@ static bool __init sgx_alloc_epc_section(u64 addr, u64 size,
> >  		list_add_tail(&page->list, &section->unsanitized_page_list);
> >  	}
> >  
> > -	for (i = 0; i < nr_pages; i++)
> > -		atomic_inc(&sgx_nr_free_pages);
> > +	atomic_add(nr_pages, &sgx_nr_free_pages);
> >  
> >  	return true;
> >  
> > -- 
> > 2.22.0
> > 
> 
> The reason I used atomic_inc() was that atomic_add() takes an int,
> which could potentially overflow.
> 
> I'll ignore this as I'll do the revert that I promised to do.

static inline unsigned long sgx_nr_free_pages(void)
{
	unsigned long cnt = 0;
	int i;

	for (i = 0; i < sgx_nr_epc_sections; i++)
		cnt += sgx_epc_sections[i].free_cnt;

	return cnt;
}

static inline bool sgx_should_reclaim(unsigned long watermark)
{
	return sgx_nr_free_pages() < watermark &&
	       !list_empty(&sgx_active_page_list);
}

I use the latter in all call sites.

/Jarkko

Patch

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index 499a9b0740c8..d45bf6fca0c8 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -195,8 +195,7 @@ static bool __init sgx_alloc_epc_section(u64 addr, u64 size,
 		list_add_tail(&page->list, &section->unsanitized_page_list);
 	}
 
-	for (i = 0; i < nr_pages; i++)
-		atomic_inc(&sgx_nr_free_pages);
+	atomic_add(nr_pages, &sgx_nr_free_pages);
 
 	return true;