linux-user: Fix loading of BSS segments

Message ID c9106487-dc4d-120a-bd48-665b3c617287@gmail.com (mailing list archive)
State: New, archived
Series: linux-user: Fix loading of BSS segments

Commit Message

Giuseppe Musacchio Dec. 17, 2020, 10:17 a.m. UTC
Some ELF binaries encode the .bss section as an extension of the data
segment by setting the segment's p_memsz > p_filesz. Some other binaries
take a different route and encode it as a stand-alone PT_LOAD segment
with p_filesz = 0 and p_memsz > 0.

Both encodings are correct per the ELF specification, but the ELF
loader had trouble handling the latter: with the old logic QEMU was
very likely to crash in zero_bss when trying to access unmapped memory.

zero_bss isn't meant to allocate whole zero-filled segments but to
"complete" a previously mapped segment with the needed zero bytes.

The fix is simple: if the segment is completely zero-filled, we
allocate one or more pages (according to p_memsz) and avoid calling
zero_bss altogether.

Signed-off-by: Giuseppe Musacchio <thatlemon@gmail.com>
---
 linux-user/elfload.c | 30 ++++++++++++++++++++----------
 1 file changed, 20 insertions(+), 10 deletions(-)
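
For reference, the two encodings described above can be told apart directly
from the program headers (readelf -l shows the same fields). The small
standalone checker below is purely illustrative and not part of the patch:
it assumes a 64-bit, native-endian ELF, skips magic/class validation, and
its file name and output format are made up.

/* bss_phdrs.c -- illustrative only: report, for each PT_LOAD segment of a
 * 64-bit ELF file, whether it carries a "tail" BSS (p_memsz > p_filesz > 0)
 * or is a pure BSS segment (p_filesz == 0, p_memsz > 0). */
#include <elf.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    Elf64_Ehdr eh;
    if (fread(&eh, sizeof(eh), 1, f) != 1) {
        fclose(f);
        return 1;
    }

    for (int i = 0; i < eh.e_phnum; i++) {
        Elf64_Phdr ph;
        if (fseek(f, (long)(eh.e_phoff + (long)i * eh.e_phentsize), SEEK_SET) ||
            fread(&ph, sizeof(ph), 1, f) != 1) {
            break;
        }
        if (ph.p_type != PT_LOAD) {
            continue;
        }
        if (ph.p_filesz == 0 && ph.p_memsz > 0) {
            printf("PT_LOAD #%d: pure BSS, p_memsz=0x%llx\n",
                   i, (unsigned long long)ph.p_memsz);
        } else if (ph.p_memsz > ph.p_filesz) {
            printf("PT_LOAD #%d: data + tail BSS, p_filesz=0x%llx p_memsz=0x%llx\n",
                   i, (unsigned long long)ph.p_filesz,
                   (unsigned long long)ph.p_memsz);
        } else {
            printf("PT_LOAD #%d: no BSS\n", i);
        }
    }
    fclose(f);
    return 0;
}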

Comments

Laurent Vivier Dec. 17, 2020, 10:48 a.m. UTC | #1
Le 17/12/2020 à 11:17, Giuseppe Musacchio a écrit :
> Some ELF binaries encode the .bss section as an extension of the data
> segment by setting the segment's p_memsz > p_filesz. Some other binaries
> take a different route and encode it as a stand-alone PT_LOAD segment
> with p_filesz = 0 and p_memsz > 0.
> 
> Both encodings are correct per the ELF specification, but the ELF
> loader had trouble handling the latter: with the old logic QEMU was
> very likely to crash in zero_bss when trying to access unmapped memory.
> 
> zero_bss isn't meant to allocate whole zero-filled segments but to
> "complete" a previously mapped segment with the needed zero bytes.
> 
> The fix is simple: if the segment is completely zero-filled, we
> allocate one or more pages (according to p_memsz) and avoid calling
> zero_bss altogether.


Does this also fix what the "linux-user/elfload: Fix handling of pure BSS segments" patch [1] fixes?

Thanks,
Laurent
[1] https://patchew.org/QEMU/20201118165206.2826-1-steplong@quicinc.com/

> Signed-off-by: Giuseppe Musacchio <thatlemon@gmail.com>
> ---
>  linux-user/elfload.c | 30 ++++++++++++++++++++----------
>  1 file changed, 20 insertions(+), 10 deletions(-)
> 
> diff --git a/linux-user/elfload.c b/linux-user/elfload.c
> index 0b02a92602..a16c240e0f 100644
> --- a/linux-user/elfload.c
> +++ b/linux-user/elfload.c
> @@ -2776,14 +2776,16 @@ static void load_elf_image(const char *image_name, int image_fd,
>              vaddr = load_bias + eppnt->p_vaddr;
>              vaddr_po = TARGET_ELF_PAGEOFFSET(vaddr);
>              vaddr_ps = TARGET_ELF_PAGESTART(vaddr);
> -            vaddr_len = TARGET_ELF_PAGELENGTH(eppnt->p_filesz + vaddr_po);
> +
> +            vaddr_ef = vaddr + eppnt->p_filesz;
> +            vaddr_em = vaddr + eppnt->p_memsz;
>  
>              /*
> -             * Some segments may be completely empty without any backing file
> -             * segment, in that case just let zero_bss allocate an empty buffer
> -             * for it.
> +             * Some segments may be completely empty, with a non-zero p_memsz
> +             * but no backing file segment.
>               */
>              if (eppnt->p_filesz != 0) {
> +                vaddr_len = TARGET_ELF_PAGELENGTH(eppnt->p_filesz + vaddr_po);
>                  error = target_mmap(vaddr_ps, vaddr_len, elf_prot,
>                                      MAP_PRIVATE | MAP_FIXED,
>                                      image_fd, eppnt->p_offset - vaddr_po);
> @@ -2791,14 +2793,22 @@ static void load_elf_image(const char *image_name, int image_fd,
>                  if (error == -1) {
>                      goto exit_mmap;
>                  }
> -            }
>  
> -            vaddr_ef = vaddr + eppnt->p_filesz;
> -            vaddr_em = vaddr + eppnt->p_memsz;
> +                /*
> +                 * If the load segment requests extra zeros (e.g. bss), map it.
> +                 */
> +                if (eppnt->p_filesz < eppnt->p_memsz) {
> +                    zero_bss(vaddr_ef, vaddr_em, elf_prot);
> +                }
> +            } else if (eppnt->p_memsz != 0) {
> +                vaddr_len = TARGET_ELF_PAGELENGTH(eppnt->p_memsz + vaddr_po);
> +                error = target_mmap(vaddr_ps, vaddr_len, elf_prot,
> +                                    MAP_PRIVATE | MAP_FIXED | MAP_ANONYMOUS,
> +                                    -1, 0);
>  
> -            /* If the load segment requests extra zeros (e.g. bss), map it.  */
> -            if (vaddr_ef < vaddr_em) {
> -                zero_bss(vaddr_ef, vaddr_em, elf_prot);
> +                if (error == -1) {
> +                    goto exit_mmap;
> +                }
>              }
>  
>              /* Find the full program boundaries.  */
>
Giuseppe Musacchio Dec. 17, 2020, 10:59 a.m. UTC | #2
On 17/12/20 11:48, Laurent Vivier wrote:
> Le 17/12/2020 à 11:17, Giuseppe Musacchio a écrit :
>> Some ELF binaries encode the .bss section as an extension of the data
>> segment by setting the segment's p_memsz > p_filesz. Some other binaries
>> take a different route and encode it as a stand-alone PT_LOAD segment
>> with p_filesz = 0 and p_memsz > 0.
>>
>> Both encodings are correct per the ELF specification, but the ELF
>> loader had trouble handling the latter: with the old logic QEMU was
>> very likely to crash in zero_bss when trying to access unmapped memory.
>>
>> zero_bss isn't meant to allocate whole zero-filled segments but to
>> "complete" a previously mapped segment with the needed zero bytes.
>>
>> The fix is simple: if the segment is completely zero-filled, we
>> allocate one or more pages (according to p_memsz) and avoid calling
>> zero_bss altogether.
> 
> 
> Does this also fix what the "linux-user/elfload: Fix handling of pure BSS segments" patch [1] fixes?
> 

Yes, I guess I'm not the first one to hit this problem.

> Thanks,
> Laurent
> [1] https://patchew.org/QEMU/20201118165206.2826-1-steplong@quicinc.com/
> 
>> Signed-off-by: Giuseppe Musacchio <thatlemon@gmail.com>
>> ---
>>  linux-user/elfload.c | 30 ++++++++++++++++++++----------
>>  1 file changed, 20 insertions(+), 10 deletions(-)
>>
>> diff --git a/linux-user/elfload.c b/linux-user/elfload.c
>> index 0b02a92602..a16c240e0f 100644
>> --- a/linux-user/elfload.c
>> +++ b/linux-user/elfload.c
>> @@ -2776,14 +2776,16 @@ static void load_elf_image(const char *image_name, int image_fd,
>>              vaddr = load_bias + eppnt->p_vaddr;
>>              vaddr_po = TARGET_ELF_PAGEOFFSET(vaddr);
>>              vaddr_ps = TARGET_ELF_PAGESTART(vaddr);
>> -            vaddr_len = TARGET_ELF_PAGELENGTH(eppnt->p_filesz + vaddr_po);
>> +
>> +            vaddr_ef = vaddr + eppnt->p_filesz;
>> +            vaddr_em = vaddr + eppnt->p_memsz;
>>  
>>              /*
>> -             * Some segments may be completely empty without any backing file
>> -             * segment, in that case just let zero_bss allocate an empty buffer
>> -             * for it.
>> +             * Some segments may be completely empty, with a non-zero p_memsz
>> +             * but no backing file segment.
>>               */
>>              if (eppnt->p_filesz != 0) {
>> +                vaddr_len = TARGET_ELF_PAGELENGTH(eppnt->p_filesz + vaddr_po);
>>                  error = target_mmap(vaddr_ps, vaddr_len, elf_prot,
>>                                      MAP_PRIVATE | MAP_FIXED,
>>                                      image_fd, eppnt->p_offset - vaddr_po);
>> @@ -2791,14 +2793,22 @@ static void load_elf_image(const char *image_name, int image_fd,
>>                  if (error == -1) {
>>                      goto exit_mmap;
>>                  }
>> -            }
>>  
>> -            vaddr_ef = vaddr + eppnt->p_filesz;
>> -            vaddr_em = vaddr + eppnt->p_memsz;
>> +                /*
>> +                 * If the load segment requests extra zeros (e.g. bss), map it.
>> +                 */
>> +                if (eppnt->p_filesz < eppnt->p_memsz) {
>> +                    zero_bss(vaddr_ef, vaddr_em, elf_prot);
>> +                }
>> +            } else if (eppnt->p_memsz != 0) {
>> +                vaddr_len = TARGET_ELF_PAGELENGTH(eppnt->p_memsz + vaddr_po);
>> +                error = target_mmap(vaddr_ps, vaddr_len, elf_prot,
>> +                                    MAP_PRIVATE | MAP_FIXED | MAP_ANONYMOUS,
>> +                                    -1, 0);
>>  
>> -            /* If the load segment requests extra zeros (e.g. bss), map it.  */
>> -            if (vaddr_ef < vaddr_em) {
>> -                zero_bss(vaddr_ef, vaddr_em, elf_prot);
>> +                if (error == -1) {
>> +                    goto exit_mmap;
>> +                }
>>              }
>>  
>>              /* Find the full program boundaries.  */
>>
>
Stephen Long Dec. 17, 2020, 5:40 p.m. UTC | #3
Laurent Vivier wrote:
> Does this also fix what the "linux-user/elfload: Fix handling of pure BSS segments" patch [1] fixes?

I can do a v2 of my patch with a better commit description that addresses
Peter's questions, but feel free to take this patch instead. It has a much
clearer commit message and seems more correct to me.
Laurent Vivier Dec. 17, 2020, 8:07 p.m. UTC | #4
Le 17/12/2020 à 18:40, Stephen Long a écrit :
> Laurent Vivier wrote:
>> Does this also fix what the "linux-user/elfload: Fix handling of pure BSS segments" patch [1] fixes?
> 
> I can do a v2 of my patch with a better commit description that addresses
> Peter's questions, but feel free to take this patch instead. It has a much
> clearer commit message and seems more correct to me.
> 

If this patch also fixes your problem, I will take this one instead.

Thanks,
Laurent
Stephen Long Dec. 17, 2020, 8:27 p.m. UTC | #5
Laurent Vivier wrote:
> If this patch also fixes your problem, I will take this one instead.

Awesome, I can confirm that this patch fixes my problem.

Thanks,
Stephen
Laurent Vivier Dec. 19, 2020, 10:55 a.m. UTC | #6
Le 17/12/2020 à 11:17, Giuseppe Musacchio a écrit :
> Some ELF binaries encode the .bss section as an extension of the data
> segment by setting the segment's p_memsz > p_filesz. Some other binaries
> take a different route and encode it as a stand-alone PT_LOAD segment
> with p_filesz = 0 and p_memsz > 0.
> 
> Both encodings are correct per the ELF specification, but the ELF
> loader had trouble handling the latter: with the old logic QEMU was
> very likely to crash in zero_bss when trying to access unmapped memory.
> 
> zero_bss isn't meant to allocate whole zero-filled segments but to
> "complete" a previously mapped segment with the needed zero bytes.
> 
> The fix is simple: if the segment is completely zero-filled, we
> allocate one or more pages (according to p_memsz) and avoid calling
> zero_bss altogether.

So, the current code handles the BSS segment by zeroing it when the memory page
has already been allocated for the data segment:

+----------------------------------+
 PAGE                              |
 ----------+------------+          |
 DATA      |   BSS      |          |

So your patch fixes the case when there is no data segment and thus no page
to complete:

+----------------------------------+
 PAGE                              |
 ----------+                       |
 BSS       |                       |


But could we have a case where the BSS starts in a page allocated for the
data segment but needs more pages?

+----------------------------------+----------------------------------+
 PAGE                              | PAGE                             |
 ------------------------+----------------------------+               |
 DATA                    | BSS                        |               |

In this case we should also allocate the page, and the previous case is only a
special case (data segment = 0) of this one.

So something like this (approximately):

if (eppnt->p_filesz != 0) {
    target_mmap(...);   /* file-backed part, as in the patch */
    if (vaddr_ef < vaddr_em) {
        zero_bss(vaddr_ef, MIN(vaddr_em, vaddr_ps + vaddr_len), elf_prot);
    }
}
if (vaddr_ps + vaddr_len < vaddr_em) {
    target_mmap(vaddr_ps + vaddr_len, vaddr_em - (vaddr_ps + vaddr_len),
                elf_prot, MAP_PRIVATE | MAP_FIXED | MAP_ANONYMOUS, -1, 0);
}

I think your fix is correct, but I'm wondering if we could have something more
generic that also covers another possible case.

If you think we don't need to introduce more complexity for a case that can't
happen, I will queue your patch as is.

Thanks,
Laurent

> Signed-off-by: Giuseppe Musacchio <thatlemon@gmail.com>
> ---
>  linux-user/elfload.c | 30 ++++++++++++++++++++----------
>  1 file changed, 20 insertions(+), 10 deletions(-)
> 
> diff --git a/linux-user/elfload.c b/linux-user/elfload.c
> index 0b02a92602..a16c240e0f 100644
> --- a/linux-user/elfload.c
> +++ b/linux-user/elfload.c
> @@ -2776,14 +2776,16 @@ static void load_elf_image(const char *image_name, int image_fd,
>              vaddr = load_bias + eppnt->p_vaddr;
>              vaddr_po = TARGET_ELF_PAGEOFFSET(vaddr);
>              vaddr_ps = TARGET_ELF_PAGESTART(vaddr);
> -            vaddr_len = TARGET_ELF_PAGELENGTH(eppnt->p_filesz + vaddr_po);
> +
> +            vaddr_ef = vaddr + eppnt->p_filesz;
> +            vaddr_em = vaddr + eppnt->p_memsz;
>  
>              /*
> -             * Some segments may be completely empty without any backing file
> -             * segment, in that case just let zero_bss allocate an empty buffer
> -             * for it.
> +             * Some segments may be completely empty, with a non-zero p_memsz
> +             * but no backing file segment.
>               */
>              if (eppnt->p_filesz != 0) {
> +                vaddr_len = TARGET_ELF_PAGELENGTH(eppnt->p_filesz + vaddr_po);
>                  error = target_mmap(vaddr_ps, vaddr_len, elf_prot,
>                                      MAP_PRIVATE | MAP_FIXED,
>                                      image_fd, eppnt->p_offset - vaddr_po);
> @@ -2791,14 +2793,22 @@ static void load_elf_image(const char *image_name, int image_fd,
>                  if (error == -1) {
>                      goto exit_mmap;
>                  }
> -            }
>  
> -            vaddr_ef = vaddr + eppnt->p_filesz;
> -            vaddr_em = vaddr + eppnt->p_memsz;
> +                /*
> +                 * If the load segment requests extra zeros (e.g. bss), map it.
> +                 */
> +                if (eppnt->p_filesz < eppnt->p_memsz) {
> +                    zero_bss(vaddr_ef, vaddr_em, elf_prot);
> +                }
> +            } else if (eppnt->p_memsz != 0) {
> +                vaddr_len = TARGET_ELF_PAGELENGTH(eppnt->p_memsz + vaddr_po);
> +                error = target_mmap(vaddr_ps, vaddr_len, elf_prot,
> +                                    MAP_PRIVATE | MAP_FIXED | MAP_ANONYMOUS,
> +                                    -1, 0);
>  
> -            /* If the load segment requests extra zeros (e.g. bss), map it.  */
> -            if (vaddr_ef < vaddr_em) {
> -                zero_bss(vaddr_ef, vaddr_em, elf_prot);
> +                if (error == -1) {
> +                    goto exit_mmap;
> +                }
>              }
>  
>              /* Find the full program boundaries.  */
>
Giuseppe Musacchio Dec. 19, 2020, 5:46 p.m. UTC | #7
On 19/12/20 11:55, Laurent Vivier wrote:
> Le 17/12/2020 à 11:17, Giuseppe Musacchio a écrit :
>> Some ELF binaries encode the .bss section as an extension of the data
>> segment by setting the segment's p_memsz > p_filesz. Some other binaries
>> take a different route and encode it as a stand-alone PT_LOAD segment
>> with p_filesz = 0 and p_memsz > 0.
>>
>> Both encodings are correct per the ELF specification, but the ELF
>> loader had trouble handling the latter: with the old logic QEMU was
>> very likely to crash in zero_bss when trying to access unmapped memory.
>>
>> zero_bss isn't meant to allocate whole zero-filled segments but to
>> "complete" a previously mapped segment with the needed zero bytes.
>>
>> The fix is simple: if the segment is completely zero-filled, we
>> allocate one or more pages (according to p_memsz) and avoid calling
>> zero_bss altogether.
> 
> So, the current code handles the BSS segment by zeroing it when the memory page
> has already been allocated for the data segment:
> 
> +----------------------------------+
>  PAGE                              |
>  ----------+------------+          |
>  DATA      |   BSS      |          |
> 
> So your patch fixes the case when there is no data segment and thus no page
> to complete:
> 
> +----------------------------------+
>  PAGE                              |
>  ----------+                       |
>  BSS       |                       |
> 
> 
> But could we have a case where the BSS starts in a page allocated for the
> data segment but needs more pages?
> 
> +----------------------------------+----------------------------------+
>  PAGE                              | PAGE                             |
>  ------------------------+----------------------------+               |
>  DATA                    | BSS                        |               |
> 
> In this case we should also allocate the page, and the previous case is only a
> special case (data segment = 0) of this one.
> 

If I understand your example correctly, this case should already be
handled by zero_bss: the first chunk of .bss is mapped as part of the
vaddr_len mapping and is zeroed by the memset, while the other chunk/page
is mapped in the `host_map_start < host_end` clause.
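
For reference, the behaviour described here amounts to roughly the
following. This is a simplified sketch, not the actual QEMU code: it
ignores guest-to-host address translation and error handling, and the
function and parameter names are illustrative.

/* Simplified sketch of zero_bss: zero the tail of the last file-backed
 * page, then anonymously map any remaining pages up to the end of BSS. */
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

static void zero_bss_sketch(uintptr_t start, uintptr_t end, int prot,
                            uintptr_t pagesize)
{
    /* First page boundary at or after 'start'. */
    uintptr_t map_start = (start + pagesize - 1) & ~(pagesize - 1);

    if (start < map_start) {
        /* Tail of the last file-backed page: already mapped, just zero it. */
        uintptr_t zero_end = end < map_start ? end : map_start;
        memset((void *)start, 0, zero_end - start);
    }
    if (map_start < end) {
        /* Whole pages beyond the file mapping: allocate them anonymously
         * (mmap returns zero-filled pages; error handling omitted). */
        mmap((void *)map_start, end - map_start, prot,
             MAP_PRIVATE | MAP_FIXED | MAP_ANONYMOUS, -1, 0);
    }
}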

> So something like this (approximately):
> 
> if (eppnt->p_filesz != 0) {
>     target_mmap(...);   /* file-backed part, as in the patch */
>     if (vaddr_ef < vaddr_em) {
>         zero_bss(vaddr_ef, MIN(vaddr_em, vaddr_ps + vaddr_len), elf_prot);
>     }
> }
> if (vaddr_ps + vaddr_len < vaddr_em) {
>     target_mmap(vaddr_ps + vaddr_len, vaddr_em - (vaddr_ps + vaddr_len),
>                 elf_prot, MAP_PRIVATE | MAP_FIXED | MAP_ANONYMOUS, -1, 0);
> }
> 
> I think your fix is correct, but I'm wondering if we could have something more
> generic that also covers another possible case.
> 
> If you think we don't need to introduce more complexity for a case that can't
> happen, I will queue your patch as is.
> 
> Thanks,
> Laurent
> 
>> Signed-off-by: Giuseppe Musacchio <thatlemon@gmail.com>
>> ---
>>  linux-user/elfload.c | 30 ++++++++++++++++++++----------
>>  1 file changed, 20 insertions(+), 10 deletions(-)
>>
>> diff --git a/linux-user/elfload.c b/linux-user/elfload.c
>> index 0b02a92602..a16c240e0f 100644
>> --- a/linux-user/elfload.c
>> +++ b/linux-user/elfload.c
>> @@ -2776,14 +2776,16 @@ static void load_elf_image(const char *image_name, int image_fd,
>>              vaddr = load_bias + eppnt->p_vaddr;
>>              vaddr_po = TARGET_ELF_PAGEOFFSET(vaddr);
>>              vaddr_ps = TARGET_ELF_PAGESTART(vaddr);
>> -            vaddr_len = TARGET_ELF_PAGELENGTH(eppnt->p_filesz + vaddr_po);
>> +
>> +            vaddr_ef = vaddr + eppnt->p_filesz;
>> +            vaddr_em = vaddr + eppnt->p_memsz;
>>  
>>              /*
>> -             * Some segments may be completely empty without any backing file
>> -             * segment, in that case just let zero_bss allocate an empty buffer
>> -             * for it.
>> +             * Some segments may be completely empty, with a non-zero p_memsz
>> +             * but no backing file segment.
>>               */
>>              if (eppnt->p_filesz != 0) {
>> +                vaddr_len = TARGET_ELF_PAGELENGTH(eppnt->p_filesz + vaddr_po);
>>                  error = target_mmap(vaddr_ps, vaddr_len, elf_prot,
>>                                      MAP_PRIVATE | MAP_FIXED,
>>                                      image_fd, eppnt->p_offset - vaddr_po);
>> @@ -2791,14 +2793,22 @@ static void load_elf_image(const char *image_name, int image_fd,
>>                  if (error == -1) {
>>                      goto exit_mmap;
>>                  }
>> -            }
>>  
>> -            vaddr_ef = vaddr + eppnt->p_filesz;
>> -            vaddr_em = vaddr + eppnt->p_memsz;
>> +                /*
>> +                 * If the load segment requests extra zeros (e.g. bss), map it.
>> +                 */
>> +                if (eppnt->p_filesz < eppnt->p_memsz) {
>> +                    zero_bss(vaddr_ef, vaddr_em, elf_prot);
>> +                }
>> +            } else if (eppnt->p_memsz != 0) {
>> +                vaddr_len = TARGET_ELF_PAGELENGTH(eppnt->p_memsz + vaddr_po);
>> +                error = target_mmap(vaddr_ps, vaddr_len, elf_prot,
>> +                                    MAP_PRIVATE | MAP_FIXED | MAP_ANONYMOUS,
>> +                                    -1, 0);
>>  
>> -            /* If the load segment requests extra zeros (e.g. bss), map it.  */
>> -            if (vaddr_ef < vaddr_em) {
>> -                zero_bss(vaddr_ef, vaddr_em, elf_prot);
>> +                if (error == -1) {
>> +                    goto exit_mmap;
>> +                }
>>              }
>>  
>>              /* Find the full program boundaries.  */
>>
>
Laurent Vivier Feb. 13, 2021, 4:41 p.m. UTC | #8
Le 17/12/2020 à 11:17, Giuseppe Musacchio a écrit :
> Some ELF binaries encode the .bss section as an extension of the data
> segment by setting the segment's p_memsz > p_filesz. Some other binaries
> take a different route and encode it as a stand-alone PT_LOAD segment
> with p_filesz = 0 and p_memsz > 0.
> 
> Both encodings are correct per the ELF specification, but the ELF
> loader had trouble handling the latter: with the old logic QEMU was
> very likely to crash in zero_bss when trying to access unmapped memory.
> 
> zero_bss isn't meant to allocate whole zero-filled segments but to
> "complete" a previously mapped segment with the needed zero bytes.
> 
> The fix is simple: if the segment is completely zero-filled, we
> allocate one or more pages (according to p_memsz) and avoid calling
> zero_bss altogether.
> 
> Signed-off-by: Giuseppe Musacchio <thatlemon@gmail.com>
> ---
>  linux-user/elfload.c | 30 ++++++++++++++++++++----------
>  1 file changed, 20 insertions(+), 10 deletions(-)
> 
> diff --git a/linux-user/elfload.c b/linux-user/elfload.c
> index 0b02a92602..a16c240e0f 100644
> --- a/linux-user/elfload.c
> +++ b/linux-user/elfload.c
> @@ -2776,14 +2776,16 @@ static void load_elf_image(const char *image_name, int image_fd,
>              vaddr = load_bias + eppnt->p_vaddr;
>              vaddr_po = TARGET_ELF_PAGEOFFSET(vaddr);
>              vaddr_ps = TARGET_ELF_PAGESTART(vaddr);
> -            vaddr_len = TARGET_ELF_PAGELENGTH(eppnt->p_filesz + vaddr_po);
> +
> +            vaddr_ef = vaddr + eppnt->p_filesz;
> +            vaddr_em = vaddr + eppnt->p_memsz;
>  
>              /*
> -             * Some segments may be completely empty without any backing file
> -             * segment, in that case just let zero_bss allocate an empty buffer
> -             * for it.
> +             * Some segments may be completely empty, with a non-zero p_memsz
> +             * but no backing file segment.
>               */
>              if (eppnt->p_filesz != 0) {
> +                vaddr_len = TARGET_ELF_PAGELENGTH(eppnt->p_filesz + vaddr_po);
>                  error = target_mmap(vaddr_ps, vaddr_len, elf_prot,
>                                      MAP_PRIVATE | MAP_FIXED,
>                                      image_fd, eppnt->p_offset - vaddr_po);
> @@ -2791,14 +2793,22 @@ static void load_elf_image(const char *image_name, int image_fd,
>                  if (error == -1) {
>                      goto exit_mmap;
>                  }
> -            }
>  
> -            vaddr_ef = vaddr + eppnt->p_filesz;
> -            vaddr_em = vaddr + eppnt->p_memsz;
> +                /*
> +                 * If the load segment requests extra zeros (e.g. bss), map it.
> +                 */
> +                if (eppnt->p_filesz < eppnt->p_memsz) {
> +                    zero_bss(vaddr_ef, vaddr_em, elf_prot);
> +                }
> +            } else if (eppnt->p_memsz != 0) {
> +                vaddr_len = TARGET_ELF_PAGELENGTH(eppnt->p_memsz + vaddr_po);
> +                error = target_mmap(vaddr_ps, vaddr_len, elf_prot,
> +                                    MAP_PRIVATE | MAP_FIXED | MAP_ANONYMOUS,
> +                                    -1, 0);
>  
> -            /* If the load segment requests extra zeros (e.g. bss), map it.  */
> -            if (vaddr_ef < vaddr_em) {
> -                zero_bss(vaddr_ef, vaddr_em, elf_prot);
> +                if (error == -1) {
> +                    goto exit_mmap;
> +                }
>              }
>  
>              /* Find the full program boundaries.  */
> 

Applied to my linux-user-for-6.0 branch.

Thanks,
Laurent

Patch

diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index 0b02a92602..a16c240e0f 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -2776,14 +2776,16 @@  static void load_elf_image(const char *image_name, int image_fd,
             vaddr = load_bias + eppnt->p_vaddr;
             vaddr_po = TARGET_ELF_PAGEOFFSET(vaddr);
             vaddr_ps = TARGET_ELF_PAGESTART(vaddr);
-            vaddr_len = TARGET_ELF_PAGELENGTH(eppnt->p_filesz + vaddr_po);
+
+            vaddr_ef = vaddr + eppnt->p_filesz;
+            vaddr_em = vaddr + eppnt->p_memsz;
 
             /*
-             * Some segments may be completely empty without any backing file
-             * segment, in that case just let zero_bss allocate an empty buffer
-             * for it.
+             * Some segments may be completely empty, with a non-zero p_memsz
+             * but no backing file segment.
              */
             if (eppnt->p_filesz != 0) {
+                vaddr_len = TARGET_ELF_PAGELENGTH(eppnt->p_filesz + vaddr_po);
                 error = target_mmap(vaddr_ps, vaddr_len, elf_prot,
                                     MAP_PRIVATE | MAP_FIXED,
                                     image_fd, eppnt->p_offset - vaddr_po);
@@ -2791,14 +2793,22 @@  static void load_elf_image(const char *image_name, int image_fd,
                 if (error == -1) {
                     goto exit_mmap;
                 }
-            }
 
-            vaddr_ef = vaddr + eppnt->p_filesz;
-            vaddr_em = vaddr + eppnt->p_memsz;
+                /*
+                 * If the load segment requests extra zeros (e.g. bss), map it.
+                 */
+                if (eppnt->p_filesz < eppnt->p_memsz) {
+                    zero_bss(vaddr_ef, vaddr_em, elf_prot);
+                }
+            } else if (eppnt->p_memsz != 0) {
+                vaddr_len = TARGET_ELF_PAGELENGTH(eppnt->p_memsz + vaddr_po);
+                error = target_mmap(vaddr_ps, vaddr_len, elf_prot,
+                                    MAP_PRIVATE | MAP_FIXED | MAP_ANONYMOUS,
+                                    -1, 0);
 
-            /* If the load segment requests extra zeros (e.g. bss), map it.  */
-            if (vaddr_ef < vaddr_em) {
-                zero_bss(vaddr_ef, vaddr_em, elf_prot);
+                if (error == -1) {
+                    goto exit_mmap;
+                }
             }
 
             /* Find the full program boundaries.  */