Message ID | 20220426112705.3323-1-liusongtang@huawei.com |
---|---|
State | New |
Series | mm/mprotect: reduce Committed_AS if memory protection is changed to PROT_NONE |
On Tue, 26 Apr 2022 19:27:05 +0800 liusongtang <liusongtang@huawei.com> wrote:

> If PROT_WRITE is set, the size of the vm area will be added to Committed_AS.
> However, if memory protection is changed to PROT_NONE,
> the corresponding physical memory will not be used, but Committed_AS still
> counts the size of the PROT_NONE memory.
>
> This patch reduces Committed_AS and frees the corresponding memory if
> memory protection is changed to PROT_NONE.
>
> ...
>
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -497,6 +497,12 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
>  	}
>
>  success:
> +	if ((newflags & (VM_READ | VM_WRITE | VM_EXEC | VM_LOCKED | VM_ACCOUNT)) == VM_ACCOUNT) {
> +		zap_page_range(vma, start, end - start);
> +		newflags &= ~VM_ACCOUNT;
> +		vm_unacct_memory((end - start) >> PAGE_SHIFT);
> +	}
> +
>  	/*
>  	 * vm_flags and vm_page_prot are protected by the mmap_lock
>  	 * held in write mode.

Surprised. If userspace does mprotect(addr, len, PROT_NONE) then
mprotect(addr, len, PROT_READ), what is now at *addr? Zeroes?
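For reference, the behaviour Andrew asks about can be checked with a small
userspace program. The sketch below is not part of the thread; on a kernel
without the patch the data is expected to survive the PROT_NONE round trip,
whereas the zap_page_range() call added by the patch would leave zero-filled
pages behind.

/*
 * Minimal userspace check: write data, drop to PROT_NONE,
 * restore PROT_READ, read it back. Sketch only.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 4096;
	char *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (addr == MAP_FAILED)
		return 1;

	memset(addr, 0xaa, len);		/* populate the anonymous page */

	if (mprotect(addr, len, PROT_NONE))	/* the transition the patch acts on */
		return 1;
	if (mprotect(addr, len, PROT_READ))	/* make the page readable again */
		return 1;

	/* Unpatched kernels print 0xaa; zapping the range would yield 0x00. */
	printf("*addr after PROT_NONE -> PROT_READ: 0x%02x\n",
	       (unsigned char)addr[0]);
	return 0;
}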
On 2022/4/27 4:34, Andrew Morton wrote:
> On Tue, 26 Apr 2022 19:27:05 +0800 liusongtang <liusongtang@huawei.com> wrote:
>
>> If PROT_WRITE is set, the size of the vm area will be added to Committed_AS.
>> However, if memory protection is changed to PROT_NONE,
>> the corresponding physical memory will not be used, but Committed_AS still
>> counts the size of the PROT_NONE memory.
>>
>> This patch reduces Committed_AS and frees the corresponding memory if
>> memory protection is changed to PROT_NONE.
>>
>> ...
>>
>> --- a/mm/mprotect.c
>> +++ b/mm/mprotect.c
>> @@ -497,6 +497,12 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
>>  	}
>>
>>  success:
>> +	if ((newflags & (VM_READ | VM_WRITE | VM_EXEC | VM_LOCKED | VM_ACCOUNT)) == VM_ACCOUNT) {
>> +		zap_page_range(vma, start, end - start);
>> +		newflags &= ~VM_ACCOUNT;
>> +		vm_unacct_memory((end - start) >> PAGE_SHIFT);
>> +	}
>> +
>>  	/*
>>  	 * vm_flags and vm_page_prot are protected by the mmap_lock
>>  	 * held in write mode.
> Surprised. If userspace does mprotect(addr, len, PROT_NONE) then
> mprotect(addr, len, PROT_READ), what is now at *addr? Zeroes?
> .

1. In the case mentioned above, I think the data at *addr is invalid after
mprotect(addr, len, PROT_NONE), so clearing it will not cause a problem.

2. Another idea is that we could check whether this vm area is populated
before reducing Committed_AS.
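The second idea, only dropping the accounting when nothing is populated in the
range, can at least be illustrated from userspace with mincore(). Note that
mincore() reports residency, so it would miss swapped-out anonymous pages, and
the in-kernel check in mprotect_fixup() would have to walk page tables instead;
this is only a sketch of the concept, and range_is_populated() is a
hypothetical helper name.

/*
 * Userspace illustration of "is this range populated?" via mincore().
 */
#include <stdbool.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

/* Hypothetical helper: true if any page of [addr, addr + len) is resident. */
static bool range_is_populated(void *addr, size_t len)
{
	long page = sysconf(_SC_PAGESIZE);
	size_t pages = (len + page - 1) / page;
	unsigned char *vec = malloc(pages);
	bool populated = false;

	if (!vec || mincore(addr, len, vec)) {
		free(vec);
		return true;		/* be conservative on error */
	}

	for (size_t i = 0; i < pages; i++) {
		if (vec[i] & 1) {	/* bit 0: page resident in memory */
			populated = true;
			break;
		}
	}
	free(vec);
	return populated;
}

int main(void)
{
	size_t len = 2 * 4096;
	char *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (addr == MAP_FAILED)
		return 1;

	addr[0] = 1;	/* fault in the first page only */
	return range_is_populated(addr, len) ? 0 : 1;
}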
Greeting,

FYI, we noticed the following commit (built with gcc-11):

commit: 5e1e18b33470f3b7cff87166b39fc068333ec8be ("[PATCH] mm/mprotect: reduce Committed_AS if memory protection is changed to PROT_NONE")
url: https://github.com/intel-lab-lkp/linux/commits/liusongtang/mm-mprotect-reduce-Committed_AS-if-memory-protection-is-changed-to-PROT_NONE/20220426-192805
base: https://github.com/hnaz/linux-mm master
patch link: https://lore.kernel.org/linux-mm/20220426112705.3323-1-liusongtang@huawei.com

in testcase: kvm-unit-tests
version: kvm-unit-tests-x86_64-1a4529c-1_20220412
with following parameters:

	ucode: 0x28

on test machine: 8 threads 1 sockets Intel(R) Core(TM) i7-4790 v3 @ 3.60GHz with 6G memory

caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):

If you fix the issue, kindly add following tag
Reported-by: kernel test robot <oliver.sang@intel.com>

SKIP asyncpf (0 tests)
PASS emulator (141 tests, 1 skipped)
FAIL eventinj
PASS hypercall (2 tests)
PASS idt_test (4 tests)

To reproduce:

	git clone https://github.com/intel/lkp-tests.git
	cd lkp-tests
	sudo bin/lkp install job.yaml           # job file is attached in this email
	bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
	sudo bin/lkp run generated-yaml-file

	# if come across any failure that blocks the test,
	# please remove ~/.lkp and /lkp dir to run from a clean state.
On 26.04.22 22:34, Andrew Morton wrote:
> On Tue, 26 Apr 2022 19:27:05 +0800 liusongtang <liusongtang@huawei.com> wrote:
>
>> If PROT_WRITE is set, the size of the vm area will be added to Committed_AS.
>> However, if memory protection is changed to PROT_NONE,
>> the corresponding physical memory will not be used, but Committed_AS still
>> counts the size of the PROT_NONE memory.
>>
>> This patch reduces Committed_AS and frees the corresponding memory if
>> memory protection is changed to PROT_NONE.
>>
>> ...
>>
>> --- a/mm/mprotect.c
>> +++ b/mm/mprotect.c
>> @@ -497,6 +497,12 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
>>  	}
>>
>>  success:
>> +	if ((newflags & (VM_READ | VM_WRITE | VM_EXEC | VM_LOCKED | VM_ACCOUNT)) == VM_ACCOUNT) {
>> +		zap_page_range(vma, start, end - start);
>> +		newflags &= ~VM_ACCOUNT;
>> +		vm_unacct_memory((end - start) >> PAGE_SHIFT);
>> +	}
>> +
>>  	/*
>>  	 * vm_flags and vm_page_prot are protected by the mmap_lock
>>  	 * held in write mode.
>
> Surprised. If userspace does mprotect(addr, len, PROT_NONE) then
> mprotect(addr, len, PROT_READ), what is now at *addr? Zeroes?
>

I don't think so. I don't see any pages getting zapped, and my quick test
(unless it's wrong) shows that the data is maintained. Further, it could
violate POSIX semantics. So this patch is wrong: there might have been
anonymous pages populated.
diff --git a/mm/mprotect.c b/mm/mprotect.c
index b69ce7a..c3121e6 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -497,6 +497,12 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
 	}
 
 success:
+	if ((newflags & (VM_READ | VM_WRITE | VM_EXEC | VM_LOCKED | VM_ACCOUNT)) == VM_ACCOUNT) {
+		zap_page_range(vma, start, end - start);
+		newflags &= ~VM_ACCOUNT;
+		vm_unacct_memory((end - start) >> PAGE_SHIFT);
+	}
+
 	/*
 	 * vm_flags and vm_page_prot are protected by the mmap_lock
 	 * held in write mode.
If PROT_WRITE is set, the size of the vm area will be added to Committed_AS.
However, if memory protection is changed to PROT_NONE, the corresponding
physical memory will not be used, but Committed_AS still counts the size of
the PROT_NONE memory.

This patch reduces Committed_AS and frees the corresponding memory if memory
protection is changed to PROT_NONE.

Signed-off-by: liusongtang <liusongtang@huawei.com>
---
 mm/mprotect.c | 6 ++++++
 1 file changed, 6 insertions(+)
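The accounting described in the changelog can be observed from /proc/meminfo.
The standalone sketch below is not part of the submission: it maps a large
writable anonymous region and prints how Committed_AS moves after mmap() and
after mprotect(PROT_NONE); on kernels without this patch the charge is
expected to remain after the PROT_NONE transition.

/*
 * Watch Committed_AS around mmap(PROT_READ|PROT_WRITE) and
 * mprotect(PROT_NONE). Values drift with system noise, so a
 * large mapping is used to make the charge visible.
 */
#include <stdio.h>
#include <sys/mman.h>

static long committed_as_kb(void)
{
	char line[256];
	long kb = -1;
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "Committed_AS: %ld kB", &kb) == 1)
			break;
	}
	fclose(f);
	return kb;
}

int main(void)
{
	size_t len = 256UL << 20;	/* 256 MiB, charged while writable */
	long before = committed_as_kb();
	char *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (addr == MAP_FAILED)
		return 1;
	printf("after mmap(RW):  +%ld kB\n", committed_as_kb() - before);

	if (mprotect(addr, len, PROT_NONE))
		return 1;
	/* Unpatched kernels keep the charge; the patch intends to drop it. */
	printf("after PROT_NONE: +%ld kB\n", committed_as_kb() - before);
	return 0;
}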