
[v6,08/12] s390/mm: Clear skeys for newly mapped huge guest pmds

Message ID 20180713063702.54628-9-frankja@linux.ibm.com (mailing list archive)
State New, archived

Commit Message

Janosch Frank July 13, 2018, 6:36 a.m. UTC
Similarly to the pte skey handling, where we set the storage key to
the default key for each newly mapped pte, we have to also do that for
huge pmds.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
---
 arch/s390/mm/pgtable.c | 26 ++++++++++++++++++++++++--
 1 file changed, 24 insertions(+), 2 deletions(-)

Comments

David Hildenbrand July 13, 2018, 8:53 a.m. UTC | #1
On 13.07.2018 08:36, Janosch Frank wrote:
> Similarly to the pte skey handling, where we set the storage key to
> the default key for each newly mapped pte, we have to also do that for
> huge pmds.
> 
> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
> ---
>  arch/s390/mm/pgtable.c | 26 ++++++++++++++++++++++++--
>  1 file changed, 24 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
> index c393a6b0f362..6550ed56fcdf 100644
> --- a/arch/s390/mm/pgtable.c
> +++ b/arch/s390/mm/pgtable.c
> @@ -410,14 +410,34 @@ static inline pmd_t pmdp_flush_lazy(struct mm_struct *mm,
>  	return old;
>  }
>  
> +static void pmdp_clear_skeys(struct mm_struct *mm, pmd_t *pmdp, pmd_t new)
> +{
> +	unsigned long paddr = pmd_val(new) & HPAGE_MASK;
> +
> +	/*
> +	 * After a guest has used the first storage key instruction,
> +	 * we must ensure, that each newly mapped huge pmd has all
> +	 * skeys cleared before mapping it.
> +	 */
> +	if (!mm_uses_skeys(mm) ||
> +	    !pmd_none(*pmdp) ||
> +	    !pmd_large(new) ||
> +	    (pmd_val(new) & _SEGMENT_ENTRY_INVALID))
> +		return;
> +
> +	__storage_key_init_range(paddr, paddr + HPAGE_SIZE - 1);

I am by far not an expert on this :)

I wonder if it could happen that we zero over already-used keys (e.g. when
changing protection of a pmd entry?). I hope Martin can review this one.

> +}
> +
>  pmd_t pmdp_xchg_direct(struct mm_struct *mm, unsigned long addr,
>  		       pmd_t *pmdp, pmd_t new)
>  {
>  	pmd_t old;
>  
>  	preempt_disable();
> -	if (mm_has_pgste(mm))
> +	if (mm_has_pgste(mm)) {
> +		pmdp_clear_skeys(mm, pmdp, new);
>  		pmdp_notify(mm, addr);

I wonder if mm_has_pgste(mm) is the right thing to check for here. AFAICS,
s390_enable_skey() does not really rely on PGSTE to be around
("theoretically" in contrast to pgste_set_key()).

Shouldn't this rather be

if (mm_uses_skeys(mm))
	pmdp_clear_skeys(mm, pmdp, new);

(so you could remove the inner check and end up with one check fewer).

> +	}
>  	old = pmdp_flush_direct(mm, addr, pmdp);
>  	*pmdp = new;
>  	preempt_enable();
> @@ -431,8 +451,10 @@ pmd_t pmdp_xchg_lazy(struct mm_struct *mm, unsigned long addr,
>  	pmd_t old;
>  
>  	preempt_disable();
> -	if (mm_has_pgste(mm))
> +	if (mm_has_pgste(mm)) {
> +		pmdp_clear_skeys(mm, pmdp, new);
>  		pmdp_notify(mm, addr);
> +	}
>  	old = pmdp_flush_lazy(mm, addr, pmdp);
>  	*pmdp = new;
>  	preempt_enable();
>
Janosch Frank July 13, 2018, 10:09 a.m. UTC | #2
On 13.07.2018 10:53, David Hildenbrand wrote:
> On 13.07.2018 08:36, Janosch Frank wrote:
>> Similarly to the pte skey handling, where we set the storage key to
>> the default key for each newly mapped pte, we have to also do that for
>> huge pmds.
>>
>> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
>> ---
>>  arch/s390/mm/pgtable.c | 26 ++++++++++++++++++++++++--
>>  1 file changed, 24 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
>> index c393a6b0f362..6550ed56fcdf 100644
>> --- a/arch/s390/mm/pgtable.c
>> +++ b/arch/s390/mm/pgtable.c
>> @@ -410,14 +410,34 @@ static inline pmd_t pmdp_flush_lazy(struct mm_struct *mm,
>>  	return old;
>>  }
>>  
>> +static void pmdp_clear_skeys(struct mm_struct *mm, pmd_t *pmdp, pmd_t new)
>> +{
>> +	unsigned long paddr = pmd_val(new) & HPAGE_MASK;
>> +
>> +	/*
>> +	 * After a guest has used the first storage key instruction,
>> +	 * we must ensure, that each newly mapped huge pmd has all
>> +	 * skeys cleared before mapping it.
>> +	 */
>> +	if (!mm_uses_skeys(mm) ||
>> +	    !pmd_none(*pmdp) ||
>> +	    !pmd_large(new) ||
>> +	    (pmd_val(new) & _SEGMENT_ENTRY_INVALID))
>> +		return;
>> +
>> +	__storage_key_init_range(paddr, paddr + HPAGE_SIZE - 1);
> 
> I am by far not an expert on this :)
> 
> I wonder if it could happen that we zero over already-used keys (e.g. when
> changing protection of a pmd entry?). I hope Martin can review this one.

That's why the old entry has to be pmd_none.
But yes, I already pre-warned Martin.
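To illustrate why the pmd_none check protects already-used keys, here is a
small stand-alone model of the guard logic in pmdp_clear_skeys(). The
fake_mm/fake_pmd types and the SEGMENT_ENTRY_INVALID bit are hypothetical
stand-ins for the kernel's mm_struct, pmd_t and s390 segment-table layout;
this is a sketch of the decision, not the real implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Placeholder invalid bit; not the real s390 segment-entry layout. */
#define SEGMENT_ENTRY_INVALID (1UL << 0)

struct fake_mm {
	bool uses_skeys;	/* models mm_uses_skeys() */
};

struct fake_pmd {
	bool none;		/* models pmd_none() */
	bool large;		/* models pmd_large() */
	uint64_t val;		/* models pmd_val() */
};

/*
 * Returns true when the storage keys for the huge page would be
 * cleared: only for a brand-new (old entry empty), valid, large
 * mapping in an skey-using mm.  A protection change replaces a
 * non-empty entry, so it never reaches the clearing path.
 */
bool would_clear_skeys(const struct fake_mm *mm,
		       const struct fake_pmd *oldp,
		       const struct fake_pmd *newp)
{
	if (!mm->uses_skeys ||
	    !oldp->none ||
	    !newp->large ||
	    (newp->val & SEGMENT_ENTRY_INVALID))
		return false;
	return true;
}
```

So mapping a huge page into an empty slot clears the keys once, while
changing protection on an existing entry (old entry not pmd_none) leaves
the guest's keys alone.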

> 
>> +}
>> +
>>  pmd_t pmdp_xchg_direct(struct mm_struct *mm, unsigned long addr,
>>  		       pmd_t *pmdp, pmd_t new)
>>  {
>>  	pmd_t old;
>>  
>>  	preempt_disable();
>> -	if (mm_has_pgste(mm))
>> +	if (mm_has_pgste(mm)) {
>> +		pmdp_clear_skeys(mm, pmdp, new);
>>  		pmdp_notify(mm, addr);
> 
> I wonder if mm_has_pgste(mm) is the right thing to check for here. AFAICS,
> s390_enable_skey() does not really rely on PGSTE to be around
> ("theoretically" in contrast to pgste_set_key()).
> 
> Shouldn't this rather be

mm_has_pgste is unfortunately something of a combination of mm_has_guest
and mm_allocates_4k_tables. I already looked into making that clear, but
it's a tangled mess right now considering ucontrol, the TIF bit and the
sysctl.

We have to test against the pgstes, as we absolutely do not want to take
the performance impact on non-guest systems (sysctl) or non-guest
processes (TIF).

> 
> if (mm_uses_skeys(mm))
> 	pmdp_clear_skeys(mm, pmdp, new);
> 
> (so you could remove the inner check and end up with one check fewer).
> 
>> +	}
>>  	old = pmdp_flush_direct(mm, addr, pmdp);
>>  	*pmdp = new;
>>  	preempt_enable();
>> @@ -431,8 +451,10 @@ pmd_t pmdp_xchg_lazy(struct mm_struct *mm, unsigned long addr,
>>  	pmd_t old;
>>  
>>  	preempt_disable();
>> -	if (mm_has_pgste(mm))
>> +	if (mm_has_pgste(mm)) {
>> +		pmdp_clear_skeys(mm, pmdp, new);
>>  		pmdp_notify(mm, addr);
>> +	}
>>  	old = pmdp_flush_lazy(mm, addr, pmdp);
>>  	*pmdp = new;
>>  	preempt_enable();
>>
> 
>
David Hildenbrand July 13, 2018, 11:58 a.m. UTC | #3
On 13.07.2018 12:09, Janosch Frank wrote:
> On 13.07.2018 10:53, David Hildenbrand wrote:
>> On 13.07.2018 08:36, Janosch Frank wrote:
>>> Similarly to the pte skey handling, where we set the storage key to
>>> the default key for each newly mapped pte, we have to also do that for
>>> huge pmds.
>>>
>>> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
>>> ---
>>>  arch/s390/mm/pgtable.c | 26 ++++++++++++++++++++++++--
>>>  1 file changed, 24 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
>>> index c393a6b0f362..6550ed56fcdf 100644
>>> --- a/arch/s390/mm/pgtable.c
>>> +++ b/arch/s390/mm/pgtable.c
>>> @@ -410,14 +410,34 @@ static inline pmd_t pmdp_flush_lazy(struct mm_struct *mm,
>>>  	return old;
>>>  }
>>>  
>>> +static void pmdp_clear_skeys(struct mm_struct *mm, pmd_t *pmdp, pmd_t new)
>>> +{
>>> +	unsigned long paddr = pmd_val(new) & HPAGE_MASK;
>>> +
>>> +	/*
>>> +	 * After a guest has used the first storage key instruction,
>>> +	 * we must ensure, that each newly mapped huge pmd has all
>>> +	 * skeys cleared before mapping it.
>>> +	 */
>>> +	if (!mm_uses_skeys(mm) ||
>>> +	    !pmd_none(*pmdp) ||
>>> +	    !pmd_large(new) ||
>>> +	    (pmd_val(new) & _SEGMENT_ENTRY_INVALID))
>>> +		return;
>>> +
>>> +	__storage_key_init_range(paddr, paddr + HPAGE_SIZE - 1);
>>
>> I am by far not an expert on this :)
>>
>> I wonder if it could happen that we zero over already-used keys (e.g. when
>> changing protection of a pmd entry?). I hope Martin can review this one.
> 
> That's why the old entry has to be pmd_none.
> But yes, I already pre-warned Martin.

So this should also hold for THP, I assume? (Replacing page tables by a PMD
and the other way around.)

> 
>>
>>> +}
>>> +
>>>  pmd_t pmdp_xchg_direct(struct mm_struct *mm, unsigned long addr,
>>>  		       pmd_t *pmdp, pmd_t new)
>>>  {
>>>  	pmd_t old;
>>>  
>>>  	preempt_disable();
>>> -	if (mm_has_pgste(mm))
>>> +	if (mm_has_pgste(mm)) {
>>> +		pmdp_clear_skeys(mm, pmdp, new);
>>>  		pmdp_notify(mm, addr);
>>
>> I wonder if mm_has_pgste(mm) is the right thing to check for here. AFAICS,
>> s390_enable_skey() does not really rely on PGSTE to be around
>> ("theoretically" in contrast to pgste_set_key()).
>>
>> Shouldn't this rather be
> 
> mm_has_pgste is unfortunately something of a combination of mm_has_guest
> and mm_allocates_4k_tables. I already looked into making that clear, but
> it's a tangled mess right now considering ucontrol, the TIF bit and the
> sysctl.
> 
> We have to test against the pgstes, as we absolutely do not want to take
> the performance impact on non-guest systems (sysctl) or non-guest
> processes (TIF).

Agreed that that should be cleaned up in the future.
Janosch Frank July 13, 2018, 12:05 p.m. UTC | #4
On 13.07.2018 13:58, David Hildenbrand wrote:
> On 13.07.2018 12:09, Janosch Frank wrote:
>> On 13.07.2018 10:53, David Hildenbrand wrote:
>>> On 13.07.2018 08:36, Janosch Frank wrote:
>>>> Similarly to the pte skey handling, where we set the storage key to
>>>> the default key for each newly mapped pte, we have to also do that for
>>>> huge pmds.
>>>>
>>>> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
>>>> ---
>>>>  arch/s390/mm/pgtable.c | 26 ++++++++++++++++++++++++--
>>>>  1 file changed, 24 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
>>>> index c393a6b0f362..6550ed56fcdf 100644
>>>> --- a/arch/s390/mm/pgtable.c
>>>> +++ b/arch/s390/mm/pgtable.c
>>>> @@ -410,14 +410,34 @@ static inline pmd_t pmdp_flush_lazy(struct mm_struct *mm,
>>>>  	return old;
>>>>  }
>>>>  
>>>> +static void pmdp_clear_skeys(struct mm_struct *mm, pmd_t *pmdp, pmd_t new)
>>>> +{
>>>> +	unsigned long paddr = pmd_val(new) & HPAGE_MASK;
>>>> +
>>>> +	/*
>>>> +	 * After a guest has used the first storage key instruction,
>>>> +	 * we must ensure, that each newly mapped huge pmd has all
>>>> +	 * skeys cleared before mapping it.
>>>> +	 */
>>>> +	if (!mm_uses_skeys(mm) ||
>>>> +	    !pmd_none(*pmdp) ||
>>>> +	    !pmd_large(new) ||
>>>> +	    (pmd_val(new) & _SEGMENT_ENTRY_INVALID))
>>>> +		return;
>>>> +
>>>> +	__storage_key_init_range(paddr, paddr + HPAGE_SIZE - 1);
>>>
>>> I am by far not an expert on this :)
>>>
>>> I wonder if it could happen that we zero over already-used keys (e.g. when
>>> changing protection of a pmd entry?). I hope Martin can review this one.
>>
>> That's why the old entry has to be pmd_none.
>> But yes, I already pre-warned Martin.
> 
> So this should also hold for THP, I assume? (Replacing page tables by a PMD
> and the other way around.)

It might not catch a merge of a page table where not all ptes are !pte_none,
but that depends on the THP implementation.

> 
>>
>>>
>>>> +}
>>>> +
>>>>  pmd_t pmdp_xchg_direct(struct mm_struct *mm, unsigned long addr,
>>>>  		       pmd_t *pmdp, pmd_t new)
>>>>  {
>>>>  	pmd_t old;
>>>>  
>>>>  	preempt_disable();
>>>> -	if (mm_has_pgste(mm))
>>>> +	if (mm_has_pgste(mm)) {
>>>> +		pmdp_clear_skeys(mm, pmdp, new);
>>>>  		pmdp_notify(mm, addr);
>>>
>>> I wonder if mm_has_pgste(mm) is the right thing to check for here. AFAICS,
>>> s390_enable_skey() does not really rely on PGSTE to be around
>>> ("theoretically" in contrast to pgste_set_key()).
>>>
>>> Shouldn't this rather be
>>
>> mm_has_pgste is unfortunately something of a combination of mm_has_guest
>> and mm_allocates_4k_tables. I already looked into making that clear, but
>> it's a tangled mess right now considering ucontrol, the TIF bit and the
>> sysctl.
>>
>> We have to test against the pgstes, as we absolutely do not want to take
>> the performance impact on non-guest systems (sysctl) or non-guest
>> processes (TIF).
> 
> Agreed that that should be cleaned up in the future.
> 
>

Patch

diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index c393a6b0f362..6550ed56fcdf 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -410,14 +410,34 @@  static inline pmd_t pmdp_flush_lazy(struct mm_struct *mm,
 	return old;
 }
 
+static void pmdp_clear_skeys(struct mm_struct *mm, pmd_t *pmdp, pmd_t new)
+{
+	unsigned long paddr = pmd_val(new) & HPAGE_MASK;
+
+	/*
+	 * After a guest has used the first storage key instruction,
+	 * we must ensure, that each newly mapped huge pmd has all
+	 * skeys cleared before mapping it.
+	 */
+	if (!mm_uses_skeys(mm) ||
+	    !pmd_none(*pmdp) ||
+	    !pmd_large(new) ||
+	    (pmd_val(new) & _SEGMENT_ENTRY_INVALID))
+		return;
+
+	__storage_key_init_range(paddr, paddr + HPAGE_SIZE - 1);
+}
+
 pmd_t pmdp_xchg_direct(struct mm_struct *mm, unsigned long addr,
 		       pmd_t *pmdp, pmd_t new)
 {
 	pmd_t old;
 
 	preempt_disable();
-	if (mm_has_pgste(mm))
+	if (mm_has_pgste(mm)) {
+		pmdp_clear_skeys(mm, pmdp, new);
 		pmdp_notify(mm, addr);
+	}
 	old = pmdp_flush_direct(mm, addr, pmdp);
 	*pmdp = new;
 	preempt_enable();
@@ -431,8 +451,10 @@  pmd_t pmdp_xchg_lazy(struct mm_struct *mm, unsigned long addr,
 	pmd_t old;
 
 	preempt_disable();
-	if (mm_has_pgste(mm))
+	if (mm_has_pgste(mm)) {
+		pmdp_clear_skeys(mm, pmdp, new);
 		pmdp_notify(mm, addr);
+	}
 	old = pmdp_flush_lazy(mm, addr, pmdp);
 	*pmdp = new;
 	preempt_enable();