| Message ID | 20240312222843.2505560-8-rick.p.edgecombe@intel.com (mailing list archive) |
|---|---|
| State | New |
| Series | Cover a guard gap corner case |
On 12/03/2024 at 23:28, Rick Edgecombe wrote:
> Future changes will need to add a new member to struct
> vm_unmapped_area_info. This would cause trouble for any call site that
> doesn't initialize the struct. Currently every caller sets each member
> manually, so if new members are added they will be uninitialized and the
> core code parsing the struct will see garbage in the new member.
>
> It could be possible to initialize the new member manually to 0 at each
> call site. This and a couple of other options were discussed, and a
> working consensus (see links) was that in general the best way to
> accomplish this would be via static initialization with designated
> member initializers. Having some struct vm_unmapped_area_info instances
> not zero initialized will put those sites at risk of feeding garbage
> into vm_unmapped_area() if the convention is to zero initialize the
> struct and any new member addition misses a call site that initializes
> each member manually.
>
> It could be possible to leave the code mostly untouched and just change
> the line:
>     struct vm_unmapped_area_info info
> to:
>     struct vm_unmapped_area_info info = {};
>
> However, that would leave the members that are manually set to zero in
> need of cleanup, as those assignments would no longer be required.
>
> So to reduce the chance of bugs via uninitialized members, instead
> simply continue the process of initializing the struct this way
> tree-wide. This will zero any unspecified members. Move the member
> initializers to the struct declaration when they are known at that
> time. Leave out the members that were manually initialized to zero, as
> zeroing them is redundant with designated initializers.

I understand from this text that, as agreed, this patch removes the
pointless/redundant zero-init of individual members. But that is not
what is done; see below.

> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> Acked-by: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
> Cc: Aneesh Kumar K.V <aneesh.kumar@kernel.org>
> Cc: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
> Cc: linuxppc-dev@lists.ozlabs.org
> Link: https://lore.kernel.org/lkml/202402280912.33AEE7A9CF@keescook/#t
> Link: https://lore.kernel.org/lkml/j7bfvig3gew3qruouxrh7z7ehjjafrgkbcmg6tcghhfh3rhmzi@wzlcoecgy5rs/
> ---
> v3:
>  - Fixed spelling errors in log
>  - Be consistent about field vs member in log
>
> Hi,
>
> This patch was split and refactored out of a tree-wide change [0] to
> just zero-init each struct vm_unmapped_area_info. The overall goal of
> the series is to help shadow stack guard gaps. Currently, there is only
> one arch with shadow stacks, but two more are in progress. It is
> compile tested only.
>
> There was further discussion that this method of initializing the
> structs, while nice in some ways, has a greater risk of introducing
> bugs in some of the more complicated callers. Since this version was
> already reviewed by arch maintainers, leave it as acknowledged.
>
> Thanks,
>
> Rick
>
> [0] https://lore.kernel.org/lkml/20240226190951.3240433-6-rick.p.edgecombe@intel.com/
> ---
>  arch/powerpc/mm/book3s64/slice.c | 23 ++++++++++++-----------
>  1 file changed, 12 insertions(+), 11 deletions(-)
>
> diff --git a/arch/powerpc/mm/book3s64/slice.c b/arch/powerpc/mm/book3s64/slice.c
> index c0b58afb9a47..6c7ac8c73a6c 100644
> --- a/arch/powerpc/mm/book3s64/slice.c
> +++ b/arch/powerpc/mm/book3s64/slice.c
> @@ -282,12 +282,12 @@ static unsigned long slice_find_area_bottomup(struct mm_struct *mm,
>  {
>  	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
>  	unsigned long found, next_end;
> -	struct vm_unmapped_area_info info;
> -
> -	info.flags = 0;
> -	info.length = len;
> -	info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
> -	info.align_offset = 0;
> +	struct vm_unmapped_area_info info = {
> +		.flags = 0,

Please remove this zero-init as agreed and explained in the commit
message.

> +		.length = len,
> +		.align_mask = PAGE_MASK & ((1ul << pshift) - 1),
> +		.align_offset = 0

Same here.

> +	};
>  	/*
>  	 * Check till the allow max value for this mmap request
>  	 */
> @@ -326,13 +326,14 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
>  {
>  	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
>  	unsigned long found, prev;
> -	struct vm_unmapped_area_info info;
> +	struct vm_unmapped_area_info info = {
> +		.flags = VM_UNMAPPED_AREA_TOPDOWN,
> +		.length = len,
> +		.align_mask = PAGE_MASK & ((1ul << pshift) - 1),
> +		.align_offset = 0

Same here.

> +	};
>  	unsigned long min_addr = max(PAGE_SIZE, mmap_min_addr);
>
> -	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
> -	info.length = len;
> -	info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
> -	info.align_offset = 0;
>  	/*
>  	 * If we are trying to allocate above DEFAULT_MAP_WINDOW
>  	 * Add the different to the mmap_base.

Christophe
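The review point above hinges on a C language guarantee: when a struct is
initialized with a designated initializer list, any members not named are
implicitly zero-initialized (C99, 6.7.8, as for static storage), so
spelling out .flags = 0 or .align_offset = 0 adds nothing. A minimal
userspace sketch illustrating the rule, using a trimmed stand-in for the
kernel's struct vm_unmapped_area_info (the real struct has more members,
and a 4 KiB PAGE_MASK is assumed here for illustration):

	#include <assert.h>

	#define PAGE_MASK (~0xfffUL)	/* assume 4 KiB pages for the sketch */

	/* Trimmed stand-in for the kernel's struct vm_unmapped_area_info;
	 * only the members used in slice.c are modeled here. */
	struct vm_unmapped_area_info {
		unsigned long flags;
		unsigned long length;
		unsigned long low_limit;
		unsigned long high_limit;
		unsigned long align_mask;
		unsigned long align_offset;
	};

	int main(void)
	{
		unsigned long len = 0x10000, pshift = 16;

		/* Designated initializer: members not named (.flags,
		 * .align_offset, the limits) are implicitly zeroed. */
		struct vm_unmapped_area_info info = {
			.length = len,
			.align_mask = PAGE_MASK & ((1UL << pshift) - 1),
		};

		assert(info.flags == 0);
		assert(info.align_offset == 0);
		return 0;
	}

Compiled with any C99-or-later compiler the asserts hold, which is why
dropping the explicit zero members, as requested above, changes nothing
at runtime.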
On Wed, 2024-03-13 at 06:44 +0000, Christophe Leroy wrote:
> I understand from this text that, as agreed, this patch removes the
> pointless/redundant zero-init of individual members. But that is not
> what is done; see below.

Err, right. I think I decided to leave it because it was already acked
and there wasn't enough discussion on the ack to be sure. I will update
it.
"Edgecombe, Rick P" <rick.p.edgecombe@intel.com> writes: > On Wed, 2024-03-13 at 06:44 +0000, Christophe Leroy wrote: >> I understand from this text that, as agreed, this patch removes the >> pointless/redundant zero-init of individual members. But it is not >> what >> is done, see below ? > > Err, right. I think I decided to leave it because it was already acked > and there wasn't enough discussion on the ack to be sure. I will update > it. That's fine by me, you can keep my ack. cheers
diff --git a/arch/powerpc/mm/book3s64/slice.c b/arch/powerpc/mm/book3s64/slice.c
index c0b58afb9a47..6c7ac8c73a6c 100644
--- a/arch/powerpc/mm/book3s64/slice.c
+++ b/arch/powerpc/mm/book3s64/slice.c
@@ -282,12 +282,12 @@ static unsigned long slice_find_area_bottomup(struct mm_struct *mm,
 {
 	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
 	unsigned long found, next_end;
-	struct vm_unmapped_area_info info;
-
-	info.flags = 0;
-	info.length = len;
-	info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
-	info.align_offset = 0;
+	struct vm_unmapped_area_info info = {
+		.flags = 0,
+		.length = len,
+		.align_mask = PAGE_MASK & ((1ul << pshift) - 1),
+		.align_offset = 0
+	};
 	/*
 	 * Check till the allow max value for this mmap request
 	 */
@@ -326,13 +326,14 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
 {
 	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
 	unsigned long found, prev;
-	struct vm_unmapped_area_info info;
+	struct vm_unmapped_area_info info = {
+		.flags = VM_UNMAPPED_AREA_TOPDOWN,
+		.length = len,
+		.align_mask = PAGE_MASK & ((1ul << pshift) - 1),
+		.align_offset = 0
+	};
 	unsigned long min_addr = max(PAGE_SIZE, mmap_min_addr);

-	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
-	info.length = len;
-	info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
-	info.align_offset = 0;
 	/*
 	 * If we are trying to allocate above DEFAULT_MAP_WINDOW
 	 * Add the different to the mmap_base.
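Applying Christophe's request to the hunks above would leave initializers
of roughly this shape; this is a sketch of the suggested follow-up, not
the committed v4 patch. The members that were explicitly zeroed are
simply omitted and picked up by the implicit zeroing:

	/* slice_find_area_bottomup(): .flags and .align_offset omitted;
	 * the designated initializer zeroes them implicitly. */
	struct vm_unmapped_area_info info = {
		.length = len,
		.align_mask = PAGE_MASK & ((1ul << pshift) - 1),
	};

	/* slice_find_area_topdown(): only .align_offset is omitted. */
	struct vm_unmapped_area_info info = {
		.flags = VM_UNMAPPED_AREA_TOPDOWN,
		.length = len,
		.align_mask = PAGE_MASK & ((1ul << pshift) - 1),
	};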