| Message ID | 20200204175913.74901-6-avagin@gmail.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | arm64: add the time namespace support |
Hi Andrei,

On 04/02/2020 17:59, Andrei Vagin wrote:
> Forbid splitting the VVAR VMA, resulting in a stricter ABI and reducing
> the amount of corner cases to consider while working further on VDSO
> time namespace support.
>
> As the offset from timens to the VVAR page is computed at compile time,
> the pages in VVAR should stay together and not be partially
> mremap()'ed.
>

I agree on the concept, but why do we need to redefine mremap?
special_mapping_mremap() (mm/mmap.c +3317) already seems to do the same
thing if we leave mremap == NULL as is.

> Signed-off-by: Andrei Vagin <avagin@gmail.com>
> ---
>  arch/arm64/kernel/vdso.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
>
> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
> index 2e553468b183..e6ebdc184c1e 100644
> --- a/arch/arm64/kernel/vdso.c
> +++ b/arch/arm64/kernel/vdso.c
> @@ -229,6 +229,17 @@ static vm_fault_t vvar_fault(const struct vm_special_mapping *sm,
>  	return vmf_insert_pfn(vma, vmf->address, pfn);
>  }
>
> +static int vvar_mremap(const struct vm_special_mapping *sm,
> +		       struct vm_area_struct *new_vma)
> +{
> +	unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
> +
> +	if (new_size != VVAR_NR_PAGES * PAGE_SIZE)
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
>  static int __setup_additional_pages(enum arch_vdso_type arch_index,
>  				    struct mm_struct *mm,
>  				    struct linux_binprm *bprm,
> @@ -311,6 +322,7 @@ static struct vm_special_mapping aarch32_vdso_spec[C_PAGES] = {
>  	{
>  		.name = "[vvar]",
>  		.fault = vvar_fault,
> +		.mremap = vvar_mremap,
>  	},
>  	{
>  		.name = "[vdso]",
> @@ -493,6 +505,7 @@ static struct vm_special_mapping vdso_spec[A_PAGES] __ro_after_init = {
>  	{
>  		.name = "[vvar]",
>  		.fault = vvar_fault,
> +		.mremap = vvar_mremap,
>  	},
>  	{
>  		.name = "[vdso]",
On Thu, Feb 20, 2020 at 12:22:52PM +0000, Vincenzo Frascino wrote:
> Hi Andrei,
>
> On 04/02/2020 17:59, Andrei Vagin wrote:
> > Forbid splitting the VVAR VMA, resulting in a stricter ABI and reducing
> > the amount of corner cases to consider while working further on VDSO
> > time namespace support.
> >
> > As the offset from timens to the VVAR page is computed at compile time,
> > the pages in VVAR should stay together and not be partially
> > mremap()'ed.
> >
>
> I agree on the concept, but why do we need to redefine mremap?
> special_mapping_mremap() (mm/mmap.c +3317) already seems to do the same
> thing if we leave mremap == NULL as is.
>

Hmmm. I have read the code of special_mapping_mremap() and I don't see
where it restricts splitting the vvar mapping.

Here is the code as I see it in the source:

static int special_mapping_mremap(struct vm_area_struct *new_vma)
{
	struct vm_special_mapping *sm = new_vma->vm_private_data;

	if (WARN_ON_ONCE(current->mm != new_vma->vm_mm))
		return -EFAULT;

	if (sm->mremap)
		return sm->mremap(sm, new_vma);

	return 0;
}

And I have checked that without this patch, I can remap just one page of
the vvar mapping.

Thanks,
Andrei
Hi Andrei,

On 2/23/20 11:30 PM, Andrei Vagin wrote:
[...]
> Hmmm. I have read the code of special_mapping_mremap() and I don't see
> where it restricts splitting the vvar mapping.
>
> Here is the code as I see it in the source:
>
> static int special_mapping_mremap(struct vm_area_struct *new_vma)
> {
> 	struct vm_special_mapping *sm = new_vma->vm_private_data;
>
> 	if (WARN_ON_ONCE(current->mm != new_vma->vm_mm))
> 		return -EFAULT;
>
> 	if (sm->mremap)
> 		return sm->mremap(sm, new_vma);
>
> 	return 0;
> }
>
> And I have checked that without this patch, I can remap just one page of
> the vvar mapping.
>

I checked it a second time and I agree. The check on new_size is
required in this case.

> Thanks,
> Andrei
>
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 2e553468b183..e6ebdc184c1e 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -229,6 +229,17 @@ static vm_fault_t vvar_fault(const struct vm_special_mapping *sm,
 	return vmf_insert_pfn(vma, vmf->address, pfn);
 }

+static int vvar_mremap(const struct vm_special_mapping *sm,
+		       struct vm_area_struct *new_vma)
+{
+	unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
+
+	if (new_size != VVAR_NR_PAGES * PAGE_SIZE)
+		return -EINVAL;
+
+	return 0;
+}
+
 static int __setup_additional_pages(enum arch_vdso_type arch_index,
 				    struct mm_struct *mm,
 				    struct linux_binprm *bprm,
@@ -311,6 +322,7 @@ static struct vm_special_mapping aarch32_vdso_spec[C_PAGES] = {
 	{
 		.name = "[vvar]",
 		.fault = vvar_fault,
+		.mremap = vvar_mremap,
 	},
 	{
 		.name = "[vdso]",
@@ -493,6 +505,7 @@ static struct vm_special_mapping vdso_spec[A_PAGES] __ro_after_init = {
 	{
 		.name = "[vvar]",
 		.fault = vvar_fault,
+		.mremap = vvar_mremap,
 	},
 	{
 		.name = "[vdso]",
Forbid splitting the VVAR VMA, resulting in a stricter ABI and reducing
the amount of corner cases to consider while working further on VDSO
time namespace support.

As the offset from timens to the VVAR page is computed at compile time,
the pages in VVAR should stay together and not be partially
mremap()'ed.

Signed-off-by: Andrei Vagin <avagin@gmail.com>
---
 arch/arm64/kernel/vdso.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)