Message ID | 20230128063229.989058-2-mawupeng1@huawei.com |
---|---|
State | New |
Series | Add overflow checks for several syscalls |
On 28.01.23 07:32, Wupeng Ma wrote:
> From: Ma Wupeng <mawupeng1@huawei.com>
>
> While testing mlock, we hit a problem when the len argument of mlock is
> ULONG_MAX. The return value of mlock is zero, but nothing gets locked,
> since len in do_mlock() overflows to zero due to the following code in
> mlock:
>
>     len = PAGE_ALIGN(len + (offset_in_page(start)));
>
> The same problem happens in munlock.
>
> Add a new check and return -EINVAL to fix these overflow scenarios,
> since they are absolutely wrong.
>
> Return 0 early to avoid burning a bunch of CPU cycles if len == 0.
>
> Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
> ---
>  mm/mlock.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/mm/mlock.c b/mm/mlock.c
> index 7032f6dd0ce1..eb09968ba27f 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -478,8 +478,6 @@ static int apply_vma_lock_flags(unsigned long start, size_t len,
>  	end = start + len;
>  	if (end < start)
>  		return -EINVAL;
> -	if (end == start)
> -		return 0;
>  	vma = mas_walk(&mas);
>  	if (!vma)
>  		return -ENOMEM;
> @@ -575,7 +573,13 @@ static __must_check int do_mlock(unsigned long start, size_t len, vm_flags_t fla
>  	if (!can_do_mlock())
>  		return -EPERM;
>
> +	if (!len)
> +		return 0;
> +
>  	len = PAGE_ALIGN(len + (offset_in_page(start)));
> +	if (!len)
> +		return -EINVAL;
> +
>  	start &= PAGE_MASK;

The "ordinary" overflows are detected in apply_vma_lock_flags(), correct?
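The wrap-around being discussed here follows directly from the unsigned
arithmetic in that line. A minimal userspace sketch of the same computation,
assuming a 4 KiB page and with PAGE_ALIGN()/offset_in_page() re-implemented
locally as stand-ins for the kernel macros:

#include <stdio.h>

/* Local stand-ins for the kernel helpers, assuming a 4 KiB page. */
#define PAGE_SIZE         4096UL
#define PAGE_MASK         (~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x)     (((x) + PAGE_SIZE - 1) & PAGE_MASK)
#define offset_in_page(p) ((p) & (PAGE_SIZE - 1))

int main(void)
{
	unsigned long start = 0x1000;  /* page-aligned, offset 0 */
	unsigned long len = ~0UL;      /* ULONG_MAX, as in the report */

	/* len + offset wraps around; aligning the wrapped value gives 0. */
	unsigned long aligned = PAGE_ALIGN(len + offset_in_page(start));

	printf("aligned len = %lu\n", aligned);  /* prints 0 */
	return 0;
}

With the aligned len collapsing to 0, apply_vma_lock_flags() then sees
end == start and, before this patch, returned 0 without locking anything.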
On 2023/2/4 1:14, David Hildenbrand wrote:
> On 28.01.23 07:32, Wupeng Ma wrote:
>> [...]
>
> The "ordinary" overflows are detected in apply_vma_lock_flags(), correct?

Overflow is not checked anywhere; however, the ordinary "return early if
len == 0" case is detected in apply_vma_lock_flags():

  do_mlock
    apply_vma_lock_flags
      end = start + len;
      if (end == start)
              return 0;

Moving the check to the beginning makes it easier to detect the overflow,
keeps the logic clearer, and avoids burning a bunch of CPU cycles.
On 06.02.23 01:48, mawupeng wrote:
> On 2023/2/4 1:14, David Hildenbrand wrote:
>> [...]
>>
>> The "ordinary" overflows are detected in apply_vma_lock_flags(), correct?
>
> Overflow is not checked anywhere; however, the ordinary "return early if
> len == 0" case is detected in apply_vma_lock_flags().

I meant the

	end = start + len;
	if (end < start)
		return -EINVAL;

Essentially, what I wanted to double-check is that with your changes, we
catch all kinds of overflows as documented in the man page, correct?
On 2023/2/7 1:05, David Hildenbrand wrote:
> On 06.02.23 01:48, mawupeng wrote:
>> [...]
>>
>> Overflow is not checked anywhere; however, the ordinary "return early if
>> len == 0" case is detected in apply_vma_lock_flags().
>
> I meant the
>
> 	end = start + len;
> 	if (end < start)
> 		return -EINVAL;
>
> Essentially, what I wanted to double-check is that with your changes, we
> catch all kinds of overflows as documented in the man page, correct?

Oh, I see. You are right, the "ordinary" overflows are detected for
mlock/munlock in apply_vma_lock_flags().

Yes, we may need to update the man page for all four of these syscalls.

Thanks,
mawupeng
On 07.02.23 02:24, mawupeng wrote:
> On 2023/2/7 1:05, David Hildenbrand wrote:
>> [...]
>>
>> Essentially, what I wanted to double-check is that with your changes, we
>> catch all kinds of overflows as documented in the man page, correct?
>
> Oh, I see. You are right, the "ordinary" overflows are detected for
> mlock/munlock in apply_vma_lock_flags().
>
> Yes, we may need to update the man page for all four of these syscalls.

E.g., mlock() already documents:

  "EINVAL  (mlock(), mlock2(), and munlock()) The result of the addition
           addr+len was less than addr (e.g., the addition may have
           resulted in an overflow)."

Just to rephrase what I wanted to double-check: are we now identifying all
such overflows, or are you aware of other corner cases?
On 2023/2/8 21:51, David Hildenbrand wrote:
> On 07.02.23 02:24, mawupeng wrote:
>> [...]
>>
>> Yes, we may need to update the man page for all four of these syscalls.
>
> E.g., mlock() already documents:
>
>   "EINVAL  (mlock(), mlock2(), and munlock()) The result of the addition
>            addr+len was less than addr (e.g., the addition may have
>            resulted in an overflow)."
>
> Just to rephrase what I wanted to double-check: are we now identifying all
> such overflows, or are you aware of other corner cases?

AFAICT there are no corner cases now. The previous, "normal" overflow can be
detected via the following code, as you mentioned:

	end = start + len;
	if (end < start)
		return -EINVAL;

This is fine for normal overflows. But len may become zero in

	PAGE_ALIGN(len + (offset_in_page(start)))

which leads to two scenarios for len == 0 (see the sketch below):

a) The user passed len as zero. This is fine; we don't need to do anything.
b) Overflow. We need to return an error rather than report success, as above.
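To make scenarios a) and b) concrete, here is a small userspace model of the
added entry checks; check_len() and the local PAGE_ALIGN()/offset_in_page()
definitions are illustrative stand-ins (assuming a 4 KiB page), not code from
mm/mlock.c:

#include <stdio.h>
#include <limits.h>

#define PAGE_SIZE         4096UL
#define PAGE_ALIGN(x)     (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))
#define offset_in_page(p) ((p) & (PAGE_SIZE - 1))

/* Model of the checks added at the top of do_mlock()/munlock():
 * 0 means "nothing to do", -22 (EINVAL) means the alignment wrapped to
 * zero, 1 means "carry on with the real work". */
static int check_len(unsigned long start, unsigned long len)
{
	if (!len)
		return 0;       /* scenario a): user passed len == 0 */

	len = PAGE_ALIGN(len + offset_in_page(start));
	if (!len)
		return -22;     /* scenario b): overflow, report -EINVAL */

	return 1;
}

int main(void)
{
	printf("len = 0         -> %d\n", check_len(0x1000, 0));          /* 0 */
	printf("len = ULONG_MAX -> %d\n", check_len(0x1000, ULONG_MAX));  /* -22 */
	printf("len = 4096      -> %d\n", check_len(0x1000, 4096));       /* 1 */
	return 0;
}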
diff --git a/mm/mlock.c b/mm/mlock.c
index 7032f6dd0ce1..eb09968ba27f 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -478,8 +478,6 @@ static int apply_vma_lock_flags(unsigned long start, size_t len,
 	end = start + len;
 	if (end < start)
 		return -EINVAL;
-	if (end == start)
-		return 0;
 	vma = mas_walk(&mas);
 	if (!vma)
 		return -ENOMEM;
@@ -575,7 +573,13 @@ static __must_check int do_mlock(unsigned long start, size_t len, vm_flags_t fla
 	if (!can_do_mlock())
 		return -EPERM;
 
+	if (!len)
+		return 0;
+
 	len = PAGE_ALIGN(len + (offset_in_page(start)));
+	if (!len)
+		return -EINVAL;
+
 	start &= PAGE_MASK;
 
 	lock_limit = rlimit(RLIMIT_MEMLOCK);
@@ -635,7 +639,13 @@ SYSCALL_DEFINE2(munlock, unsigned long, start, size_t, len)
 	start = untagged_addr(start);
 
+	if (!len)
+		return 0;
+
 	len = PAGE_ALIGN(len + (offset_in_page(start)));
+	if (!len)
+		return -EINVAL;
+
 	start &= PAGE_MASK;
 
 	if (mmap_write_lock_killable(current->mm))
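A userspace reproducer along the lines of the commit message might look like
the sketch below. Per the report, an unpatched kernel is expected to return 0
from the ULONG_MAX call while locking nothing, and a patched kernel should
fail it with EINVAL; treat that expected output as an assumption to verify
rather than a guarantee:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <limits.h>
#include <sys/mman.h>

int main(void)
{
	/* A small anonymous mapping to use as a valid start address. */
	void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* len == ULONG_MAX: the overflow case from the report. */
	errno = 0;
	int ret = mlock(p, ULONG_MAX);
	printf("mlock(p, ULONG_MAX) = %d (%s)\n",
	       ret, ret ? strerror(errno) : "ok");

	/* len == 0: should remain a successful no-op. */
	errno = 0;
	ret = mlock(p, 0);
	printf("mlock(p, 0)         = %d (%s)\n",
	       ret, ret ? strerror(errno) : "ok");

	munlock(p, 4096);
	munmap(p, 4096);
	return 0;
}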