Message ID | 20210820155211.3153137-1-philmd@redhat.com (mailing list archive) |
---|---|
State | New, archived |
Series | softmmu/physmem: Improve guest memory allocation failure error message |
On 20.08.21 17:52, Philippe Mathieu-Daudé wrote: > When Linux refuses to overcommit a seriously wild allocation we get: > > $ qemu-system-i386 -m 40000000 > qemu-system-i386: cannot set up guest memory 'pc.ram': Cannot allocate memory > > Slighly improve the error message, displaying the memory size > requested (in case the user didn't expect unspecified memory size > unit is in MiB): > > $ qemu-system-i386 -m 40000000 > qemu-system-i386: Cannot set up 38.1 TiB of guest memory 'pc.ram': Cannot allocate memory > > Reported-by: Bin Meng <bmeng.cn@gmail.com> > Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> > --- > softmmu/physmem.c | 4 +++- > 1 file changed, 3 insertions(+), 1 deletion(-) > > diff --git a/softmmu/physmem.c b/softmmu/physmem.c > index 2e18947598e..2f300a9e79b 100644 > --- a/softmmu/physmem.c > +++ b/softmmu/physmem.c > @@ -1982,8 +1982,10 @@ static void ram_block_add(RAMBlock *new_block, Error **errp) > &new_block->mr->align, > shared, noreserve); > if (!new_block->host) { > + g_autofree char *size_s = size_to_str(new_block->max_length); > error_setg_errno(errp, errno, > - "cannot set up guest memory '%s'", > + "Cannot set up %s of guest memory '%s'", > + size_s, > memory_region_name(new_block->mr)); > qemu_mutex_unlock_ramlist(); > return; > IIRC, ram blocks might not necessarily be used for guest memory ... or is my memory wrong?
On 8/20/21 5:53 PM, David Hildenbrand wrote: > On 20.08.21 17:52, Philippe Mathieu-Daudé wrote: >> When Linux refuses to overcommit a seriously wild allocation we get: >> >> $ qemu-system-i386 -m 40000000 >> qemu-system-i386: cannot set up guest memory 'pc.ram': Cannot >> allocate memory >> >> Slighly improve the error message, displaying the memory size >> requested (in case the user didn't expect unspecified memory size >> unit is in MiB): >> >> $ qemu-system-i386 -m 40000000 >> qemu-system-i386: Cannot set up 38.1 TiB of guest memory 'pc.ram': >> Cannot allocate memory >> >> Reported-by: Bin Meng <bmeng.cn@gmail.com> >> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> >> --- >> softmmu/physmem.c | 4 +++- >> 1 file changed, 3 insertions(+), 1 deletion(-) >> >> diff --git a/softmmu/physmem.c b/softmmu/physmem.c >> index 2e18947598e..2f300a9e79b 100644 >> --- a/softmmu/physmem.c >> +++ b/softmmu/physmem.c >> @@ -1982,8 +1982,10 @@ static void ram_block_add(RAMBlock *new_block, >> Error **errp) >> >> &new_block->mr->align, >> shared, noreserve); >> if (!new_block->host) { >> + g_autofree char *size_s = >> size_to_str(new_block->max_length); >> error_setg_errno(errp, errno, >> - "cannot set up guest memory '%s'", >> + "Cannot set up %s of guest memory >> '%s'", >> + size_s, >> memory_region_name(new_block->mr)); >> qemu_mutex_unlock_ramlist(); >> return; >> > > IIRC, ram blocks might not necessarily be used for guest memory ... or > is my memory wrong? No clue, this error message was already here. No problem to change s/guest/block/ although.
On Fri, 20 Aug 2021 18:00:26 +0200 Philippe Mathieu-Daudé <philmd@redhat.com> wrote: > On 8/20/21 5:53 PM, David Hildenbrand wrote: > > On 20.08.21 17:52, Philippe Mathieu-Daudé wrote: > >> When Linux refuses to overcommit a seriously wild allocation we get: > >> > >> $ qemu-system-i386 -m 40000000 > >> qemu-system-i386: cannot set up guest memory 'pc.ram': Cannot > >> allocate memory > >> > >> Slighly improve the error message, displaying the memory size > >> requested (in case the user didn't expect unspecified memory size > >> unit is in MiB): > >> > >> $ qemu-system-i386 -m 40000000 > >> qemu-system-i386: Cannot set up 38.1 TiB of guest memory 'pc.ram': > >> Cannot allocate memory > >> > >> Reported-by: Bin Meng <bmeng.cn@gmail.com> > >> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> > >> --- > >> softmmu/physmem.c | 4 +++- > >> 1 file changed, 3 insertions(+), 1 deletion(-) > >> > >> diff --git a/softmmu/physmem.c b/softmmu/physmem.c > >> index 2e18947598e..2f300a9e79b 100644 > >> --- a/softmmu/physmem.c > >> +++ b/softmmu/physmem.c > >> @@ -1982,8 +1982,10 @@ static void ram_block_add(RAMBlock *new_block, > >> Error **errp) > >> > >> &new_block->mr->align, > >> shared, noreserve); > >> if (!new_block->host) { > >> + g_autofree char *size_s = > >> size_to_str(new_block->max_length); > >> error_setg_errno(errp, errno, > >> - "cannot set up guest memory '%s'", > >> + "Cannot set up %s of guest memory > >> '%s'", > >> + size_s, > >> memory_region_name(new_block->mr)); > >> qemu_mutex_unlock_ramlist(); > >> return; > >> > > > > IIRC, ram blocks might not necessarily be used for guest memory ... or > > is my memory wrong? > > No clue, this error message was already here. it's not only guest RAM, adding size here is marginal improvement, (it won't help much since it's not exact match to CLI which may use suffixes for sizes) > > No problem to change s/guest/block/ although. >
On 8/20/21 6:40 PM, Igor Mammedov wrote: > On Fri, 20 Aug 2021 18:00:26 +0200 > Philippe Mathieu-Daudé <philmd@redhat.com> wrote: > >> On 8/20/21 5:53 PM, David Hildenbrand wrote: >>> On 20.08.21 17:52, Philippe Mathieu-Daudé wrote: >>>> When Linux refuses to overcommit a seriously wild allocation we get: >>>> >>>> $ qemu-system-i386 -m 40000000 >>>> qemu-system-i386: cannot set up guest memory 'pc.ram': Cannot >>>> allocate memory >>>> >>>> Slighly improve the error message, displaying the memory size >>>> requested (in case the user didn't expect unspecified memory size >>>> unit is in MiB): >>>> >>>> $ qemu-system-i386 -m 40000000 >>>> qemu-system-i386: Cannot set up 38.1 TiB of guest memory 'pc.ram': >>>> Cannot allocate memory >>>> >>>> Reported-by: Bin Meng <bmeng.cn@gmail.com> >>>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> >>>> --- >>>> softmmu/physmem.c | 4 +++- >>>> 1 file changed, 3 insertions(+), 1 deletion(-) >>>> >>>> diff --git a/softmmu/physmem.c b/softmmu/physmem.c >>>> index 2e18947598e..2f300a9e79b 100644 >>>> --- a/softmmu/physmem.c >>>> +++ b/softmmu/physmem.c >>>> @@ -1982,8 +1982,10 @@ static void ram_block_add(RAMBlock *new_block, >>>> Error **errp) >>>> >>>> &new_block->mr->align, >>>> shared, noreserve); >>>> if (!new_block->host) { >>>> + g_autofree char *size_s = >>>> size_to_str(new_block->max_length); >>>> error_setg_errno(errp, errno, >>>> - "cannot set up guest memory '%s'", >>>> + "Cannot set up %s of guest memory >>>> '%s'", >>>> + size_s, >>>> memory_region_name(new_block->mr)); >>>> qemu_mutex_unlock_ramlist(); >>>> return; >>>> >>> >>> IIRC, ram blocks might not necessarily be used for guest memory ... or >>> is my memory wrong? >> >> No clue, this error message was already here. > > it's not only guest RAM, adding size here is marginal improvement, > (it won't help much since it's not exact match to CLI which may use suffixes for sizes) The suffixed size is already converted at this point: qemu-system-i386 -m 2T qemu-system-i386: Cannot set up 2 TiB of guest memory 'pc.ram': Cannot allocate memory I agree however the size displayed might be less than the size passed to the '-m' argument. Anyhow I still see the size displayed in the error message as an useful hint: $ qemu-system-i386 -m 64000 qemu-system-i386: cannot set up guest memory 'pc.ram': Cannot allocate memory VS: $ qemu-system-i386 -m 64000 qemu-system-i386: Cannot set up 62.5 GiB of guest memory 'pc.ram': Cannot allocate memory > >> >> No problem to change s/guest/block/ although. >> >
On Fri, 20 Aug 2021 at 17:59, Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> Anyhow I still see the size displayed in the error message as an
> useful hint:
>
> $ qemu-system-i386 -m 64000
> qemu-system-i386: cannot set up guest memory 'pc.ram': Cannot allocate
> memory
>
> VS:
>
> $ qemu-system-i386 -m 64000
> qemu-system-i386: Cannot set up 62.5 GiB of guest memory 'pc.ram':
> Cannot allocate memory

I hadn't spotted that we were doing the size-to-str -- I think that's
definitely helpful because it will catch cases like the one here where
the user didn't realize they were asking for 30 terabytes of RAM...

-- PMM
On Fri, Aug 20, 2021 at 05:52:11PM +0200, Philippe Mathieu-Daudé wrote: > When Linux refuses to overcommit a seriously wild allocation we get: > > $ qemu-system-i386 -m 40000000 > qemu-system-i386: cannot set up guest memory 'pc.ram': Cannot allocate memory > > Slighly improve the error message, displaying the memory size > requested (in case the user didn't expect unspecified memory size > unit is in MiB): > > $ qemu-system-i386 -m 40000000 > qemu-system-i386: Cannot set up 38.1 TiB of guest memory 'pc.ram': Cannot allocate memory > > Reported-by: Bin Meng <bmeng.cn@gmail.com> > Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com>
On Fri, Aug 20, 2021 at 11:52 PM Philippe Mathieu-Daudé <philmd@redhat.com> wrote: > > When Linux refuses to overcommit a seriously wild allocation we get: > > $ qemu-system-i386 -m 40000000 > qemu-system-i386: cannot set up guest memory 'pc.ram': Cannot allocate memory > > Slighly improve the error message, displaying the memory size typo: Slightly > requested (in case the user didn't expect unspecified memory size > unit is in MiB): > > $ qemu-system-i386 -m 40000000 > qemu-system-i386: Cannot set up 38.1 TiB of guest memory 'pc.ram': Cannot allocate memory > > Reported-by: Bin Meng <bmeng.cn@gmail.com> > Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> > --- > softmmu/physmem.c | 4 +++- > 1 file changed, 3 insertions(+), 1 deletion(-) > > diff --git a/softmmu/physmem.c b/softmmu/physmem.c > index 2e18947598e..2f300a9e79b 100644 > --- a/softmmu/physmem.c > +++ b/softmmu/physmem.c > @@ -1982,8 +1982,10 @@ static void ram_block_add(RAMBlock *new_block, Error **errp) > &new_block->mr->align, > shared, noreserve); > if (!new_block->host) { > + g_autofree char *size_s = size_to_str(new_block->max_length); Does g_autofree work with every compiler we support? Looks it only applies to GCC and clang? https://www.gitmemory.com/issue/linuxwacom/libwacom/142/518787578 > error_setg_errno(errp, errno, > - "cannot set up guest memory '%s'", > + "Cannot set up %s of guest memory '%s'", > + size_s, Nice improvement! > memory_region_name(new_block->mr)); > qemu_mutex_unlock_ramlist(); > return; Tested-by: Bin Meng <bmeng.cn@gmail.com>
On Sat, 21 Aug 2021 at 11:03, Bin Meng <bmeng.cn@gmail.com> wrote:
> Does g_autofree work with every compiler we support?

Yes. We use it extensively:

$ git grep g_autofree |wc -l
329

> Looks it only applies to GCC and clang?
> https://www.gitmemory.com/issue/linuxwacom/libwacom/142/518787578

Those are the only two compilers we support :-)

-- PMM
On 8/21/21 12:01 PM, Bin Meng wrote: > On Fri, Aug 20, 2021 at 11:52 PM Philippe Mathieu-Daudé > <philmd@redhat.com> wrote: >> >> When Linux refuses to overcommit a seriously wild allocation we get: >> >> $ qemu-system-i386 -m 40000000 >> qemu-system-i386: cannot set up guest memory 'pc.ram': Cannot allocate memory >> >> Slighly improve the error message, displaying the memory size > > typo: Slightly Oops. >> if (!new_block->host) { >> + g_autofree char *size_s = size_to_str(new_block->max_length); > > Does g_autofree work with every compiler we support? > > Looks it only applies to GCC and clang? > https://www.gitmemory.com/issue/linuxwacom/libwacom/142/518787578 Which are the only two supported by the project AFAIK. g_autofree depends on glib, minimum available since commit 00f2cfbbec6 ("glib: bump min required glib library version to 2.48"). Merged here: commit 3590b27c7a2be7a24b4b265 Merge: d013d220c71 57b9f113fce Date: Thu Aug 22 17:57:09 2019 +0100 Merge remote-tracking branch 'remotes/berrange/tags/autofree-pull-request' into staging require newer glib2 to enable autofree'ing of stack variables exiting scope > Tested-by: Bin Meng <bmeng.cn@gmail.com> Thanks!
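For context on the pattern discussed above, here is a minimal, standalone sketch of how glib's g_autofree works -- plain glib rather than QEMU code, with an invented message string purely for illustration:

```c
/*
 * Minimal standalone sketch of the g_autofree pattern (plain glib, not
 * QEMU code); the message text is invented purely for illustration.
 */
#include <glib.h>
#include <stdio.h>

static void report_failure(guint64 mib)
{
    /*
     * The cleanup attribute behind g_autofree calls g_free() on the
     * pointer automatically when it goes out of scope, on every exit
     * path, so no explicit g_free() is needed.
     */
    g_autofree char *msg =
        g_strdup_printf("failed to allocate %" G_GUINT64_FORMAT " MiB", mib);

    fprintf(stderr, "%s\n", msg);
}   /* msg is freed here */

int main(void)
{
    report_failure(40000000);
    return 0;
}
```

The cleanup attribute that g_autofree expands to is a GCC/Clang extension, which is what prompts the compiler-support question above; the example needs nothing beyond `gcc example.c $(pkg-config --cflags --libs glib-2.0)`.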
On 20.08.21 18:00, Philippe Mathieu-Daudé wrote: > On 8/20/21 5:53 PM, David Hildenbrand wrote: >> On 20.08.21 17:52, Philippe Mathieu-Daudé wrote: >>> When Linux refuses to overcommit a seriously wild allocation we get: >>> >>> $ qemu-system-i386 -m 40000000 >>> qemu-system-i386: cannot set up guest memory 'pc.ram': Cannot >>> allocate memory >>> >>> Slighly improve the error message, displaying the memory size >>> requested (in case the user didn't expect unspecified memory size >>> unit is in MiB): >>> >>> $ qemu-system-i386 -m 40000000 >>> qemu-system-i386: Cannot set up 38.1 TiB of guest memory 'pc.ram': >>> Cannot allocate memory >>> >>> Reported-by: Bin Meng <bmeng.cn@gmail.com> >>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com> >>> --- >>> softmmu/physmem.c | 4 +++- >>> 1 file changed, 3 insertions(+), 1 deletion(-) >>> >>> diff --git a/softmmu/physmem.c b/softmmu/physmem.c >>> index 2e18947598e..2f300a9e79b 100644 >>> --- a/softmmu/physmem.c >>> +++ b/softmmu/physmem.c >>> @@ -1982,8 +1982,10 @@ static void ram_block_add(RAMBlock *new_block, >>> Error **errp) >>> >>> &new_block->mr->align, >>> shared, noreserve); >>> if (!new_block->host) { >>> + g_autofree char *size_s = >>> size_to_str(new_block->max_length); >>> error_setg_errno(errp, errno, >>> - "cannot set up guest memory '%s'", >>> + "Cannot set up %s of guest memory >>> '%s'", >>> + size_s, >>> memory_region_name(new_block->mr)); >>> qemu_mutex_unlock_ramlist(); >>> return; >>> >> >> IIRC, ram blocks might not necessarily be used for guest memory ... or >> is my memory wrong? > > No clue, this error message was already here. > > No problem to change s/guest/block/ although. We should probably just adjust that as well (separate patch) ... but your patch subject also mentions "guest memory". Not opposed to printing the size, although I doubt that it will really stop similar questions/problems getting raised.
On Mon, 23 Aug 2021 at 09:40, David Hildenbrand <david@redhat.com> wrote: > Not opposed to printing the size, although I doubt that it will really > stop similar questions/problems getting raised. The case that triggered this was somebody thinking -m took a byte count, so very likely that an error message saying "you tried to allocate 38TB" would have made their mistake clear in a way that just "allocation failed" did not. It also means that if a future user asks us for help then we can look at the error message and immediately tell them the problem, rather than going "hmm, what are all the possible ways that allocation might have failed" and going off down rabbitholes like VM overcommit settings... -- PMM
On 23.08.21 11:23, Peter Maydell wrote: > On Mon, 23 Aug 2021 at 09:40, David Hildenbrand <david@redhat.com> wrote: >> Not opposed to printing the size, although I doubt that it will really >> stop similar questions/problems getting raised. > > The case that triggered this was somebody thinking > -m took a byte count, so very likely that an error message > saying "you tried to allocate 38TB" would have made their > mistake clear in a way that just "allocation failed" did not. > It also means that if a future user asks us for help then > we can look at the error message and immediately tell them > the problem, rather than going "hmm, what are all the possible > ways that allocation might have failed" and going off down > rabbitholes like VM overcommit settings... We've had similar issues recently where Linux memory overcommit handling rejected the allocation -- and the user was well aware about the actual size. You won't be able to catch such reports, because people don't understand how Linux memory overcommit handling works or was configured. "I have 3 GiB of free memory, why can't I create a 3 GiB VM". "I have 3 GiB of RAM, why can't I create a 3 GiB VM even if it won't make use of all 3 GiB of memory". Thus my comment, it will only stop very basic usage issues. And I agree that looking at the error *might* help. It didn't help for the cases I just described, because we need much more system information to make a guess what the user error actually is.
On 8/23/21 11:29 AM, David Hildenbrand wrote: > On 23.08.21 11:23, Peter Maydell wrote: >> On Mon, 23 Aug 2021 at 09:40, David Hildenbrand <david@redhat.com> wrote: >>> Not opposed to printing the size, although I doubt that it will really >>> stop similar questions/problems getting raised. >> >> The case that triggered this was somebody thinking >> -m took a byte count, so very likely that an error message >> saying "you tried to allocate 38TB" would have made their >> mistake clear in a way that just "allocation failed" did not. >> It also means that if a future user asks us for help then >> we can look at the error message and immediately tell them >> the problem, rather than going "hmm, what are all the possible >> ways that allocation might have failed" and going off down >> rabbitholes like VM overcommit settings... > > We've had similar issues recently where Linux memory overcommit handling > rejected the allocation -- and the user was well aware about the actual > size. You won't be able to catch such reports, because people don't > understand how Linux memory overcommit handling works or was configured. > > "I have 3 GiB of free memory, why can't I create a 3 GiB VM". "I have 3 > GiB of RAM, why can't I create a 3 GiB VM even if it won't make use of > all 3 GiB of memory". > > Thus my comment, it will only stop very basic usage issues. And I agree > that looking at the error *might* help. It didn't help for the cases I > just described, because we need much more system information to make a > guess what the user error actually is. Is it possible to get the maximal overcommitable amount on Linux?
On 23.08.21 12:12, Philippe Mathieu-Daudé wrote: > On 8/23/21 11:29 AM, David Hildenbrand wrote: >> On 23.08.21 11:23, Peter Maydell wrote: >>> On Mon, 23 Aug 2021 at 09:40, David Hildenbrand <david@redhat.com> wrote: >>>> Not opposed to printing the size, although I doubt that it will really >>>> stop similar questions/problems getting raised. >>> >>> The case that triggered this was somebody thinking >>> -m took a byte count, so very likely that an error message >>> saying "you tried to allocate 38TB" would have made their >>> mistake clear in a way that just "allocation failed" did not. >>> It also means that if a future user asks us for help then >>> we can look at the error message and immediately tell them >>> the problem, rather than going "hmm, what are all the possible >>> ways that allocation might have failed" and going off down >>> rabbitholes like VM overcommit settings... >> >> We've had similar issues recently where Linux memory overcommit handling >> rejected the allocation -- and the user was well aware about the actual >> size. You won't be able to catch such reports, because people don't >> understand how Linux memory overcommit handling works or was configured. >> >> "I have 3 GiB of free memory, why can't I create a 3 GiB VM". "I have 3 >> GiB of RAM, why can't I create a 3 GiB VM even if it won't make use of >> all 3 GiB of memory". >> >> Thus my comment, it will only stop very basic usage issues. And I agree >> that looking at the error *might* help. It didn't help for the cases I >> just described, because we need much more system information to make a >> guess what the user error actually is. > > Is it possible to get the maximal overcommitable amount on Linux? Not reliably I think. In the "always" mode, there is none. In the "guess"/"estimate" mode, the kernel takes a guess (currently implemented as checking if the mmap size <= total RAM + total SWAP). Committable = MemTotal + SwapTotal In the "never" mode: Committable = CommitLimit - Committed_AS However, the value gets further reduced for !root applications by /proc/sys/vm/admin_reserve_kbytes. Replicating these calculations in user space would be suboptimal IMHO.
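To make the two estimates above concrete, here is a rough, Linux-only sketch that derives them from /proc/meminfo. The field names are real, but this only illustrates the calculation described above -- it ignores the admin_reserve_kbytes reduction for non-root users and is not something QEMU itself does:

```c
/*
 * Rough, Linux-only illustration of the two "committable" estimates
 * described above, read from /proc/meminfo (values are in KiB).
 * It ignores the admin_reserve_kbytes reduction for non-root users
 * and is not part of QEMU.
 */
#include <stdio.h>
#include <string.h>

static long meminfo_kib(const char *field)
{
    char line[256];
    long val = -1;
    FILE *f = fopen("/proc/meminfo", "r");

    if (!f) {
        return -1;
    }
    while (fgets(line, sizeof(line), f)) {
        size_t len = strlen(field);

        if (strncmp(line, field, len) == 0 && line[len] == ':') {
            sscanf(line + len + 1, "%ld", &val);
            break;
        }
    }
    fclose(f);
    return val;   /* KiB, or -1 if the field was not found */
}

int main(void)
{
    /* overcommit_memory=0 ("guess"/"estimate"): roughly RAM + swap */
    long guess = meminfo_kib("MemTotal") + meminfo_kib("SwapTotal");

    /* overcommit_memory=2 ("never"): CommitLimit - Committed_AS */
    long never = meminfo_kib("CommitLimit") - meminfo_kib("Committed_AS");

    printf("estimate-mode committable: ~%ld KiB\n", guess);
    printf("never-mode committable:    ~%ld KiB\n", never);
    return 0;
}
```

Which of the two numbers applies depends on /proc/sys/vm/overcommit_memory, and in "always" mode (1) neither does -- which is exactly why replicating this in user space is unattractive.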
On 8/23/21 12:24 PM, David Hildenbrand wrote: > On 23.08.21 12:12, Philippe Mathieu-Daudé wrote: >> On 8/23/21 11:29 AM, David Hildenbrand wrote: >>> On 23.08.21 11:23, Peter Maydell wrote: >>>> On Mon, 23 Aug 2021 at 09:40, David Hildenbrand <david@redhat.com> >>>> wrote: >>>>> Not opposed to printing the size, although I doubt that it will really >>>>> stop similar questions/problems getting raised. >>>> >>>> The case that triggered this was somebody thinking >>>> -m took a byte count, so very likely that an error message >>>> saying "you tried to allocate 38TB" would have made their >>>> mistake clear in a way that just "allocation failed" did not. >>>> It also means that if a future user asks us for help then >>>> we can look at the error message and immediately tell them >>>> the problem, rather than going "hmm, what are all the possible >>>> ways that allocation might have failed" and going off down >>>> rabbitholes like VM overcommit settings... >>> >>> We've had similar issues recently where Linux memory overcommit handling >>> rejected the allocation -- and the user was well aware about the actual >>> size. You won't be able to catch such reports, because people don't >>> understand how Linux memory overcommit handling works or was configured. >>> >>> "I have 3 GiB of free memory, why can't I create a 3 GiB VM". "I have 3 >>> GiB of RAM, why can't I create a 3 GiB VM even if it won't make use of >>> all 3 GiB of memory". >>> >>> Thus my comment, it will only stop very basic usage issues. And I agree >>> that looking at the error *might* help. It didn't help for the cases I >>> just described, because we need much more system information to make a >>> guess what the user error actually is. >> >> Is it possible to get the maximal overcommitable amount on Linux? > > Not reliably I think. > > In the "always" mode, there is none. > > In the "guess"/"estimate" mode, the kernel takes a guess (currently > implemented as checking if the mmap size <= total RAM + total SWAP). > Committable = MemTotal + SwapTotal > > In the "never" mode: > Committable = CommitLimit - Committed_AS > However, the value gets further reduced for !root applications by > /proc/sys/vm/admin_reserve_kbytes. > > Replicating these calculations in user space would be suboptimal IMHO. What about simply giving a hint about memory overcommit and display a link to documentation with longer description about how to check and figure out this issue?
On 23.08.21 12:34, Philippe Mathieu-Daudé wrote: > On 8/23/21 12:24 PM, David Hildenbrand wrote: >> On 23.08.21 12:12, Philippe Mathieu-Daudé wrote: >>> On 8/23/21 11:29 AM, David Hildenbrand wrote: >>>> On 23.08.21 11:23, Peter Maydell wrote: >>>>> On Mon, 23 Aug 2021 at 09:40, David Hildenbrand <david@redhat.com> >>>>> wrote: >>>>>> Not opposed to printing the size, although I doubt that it will really >>>>>> stop similar questions/problems getting raised. >>>>> >>>>> The case that triggered this was somebody thinking >>>>> -m took a byte count, so very likely that an error message >>>>> saying "you tried to allocate 38TB" would have made their >>>>> mistake clear in a way that just "allocation failed" did not. >>>>> It also means that if a future user asks us for help then >>>>> we can look at the error message and immediately tell them >>>>> the problem, rather than going "hmm, what are all the possible >>>>> ways that allocation might have failed" and going off down >>>>> rabbitholes like VM overcommit settings... >>>> >>>> We've had similar issues recently where Linux memory overcommit handling >>>> rejected the allocation -- and the user was well aware about the actual >>>> size. You won't be able to catch such reports, because people don't >>>> understand how Linux memory overcommit handling works or was configured. >>>> >>>> "I have 3 GiB of free memory, why can't I create a 3 GiB VM". "I have 3 >>>> GiB of RAM, why can't I create a 3 GiB VM even if it won't make use of >>>> all 3 GiB of memory". >>>> >>>> Thus my comment, it will only stop very basic usage issues. And I agree >>>> that looking at the error *might* help. It didn't help for the cases I >>>> just described, because we need much more system information to make a >>>> guess what the user error actually is. >>> >>> Is it possible to get the maximal overcommitable amount on Linux? >> >> Not reliably I think. >> >> In the "always" mode, there is none. >> >> In the "guess"/"estimate" mode, the kernel takes a guess (currently >> implemented as checking if the mmap size <= total RAM + total SWAP). >> Committable = MemTotal + SwapTotal >> >> In the "never" mode: >> Committable = CommitLimit - Committed_AS >> However, the value gets further reduced for !root applications by >> /proc/sys/vm/admin_reserve_kbytes. >> >> Replicating these calculations in user space would be suboptimal IMHO. > > What about simply giving a hint about memory overcommit and display > a link to documentation with longer description about how to check > and figure out this issue? That would be highly OS-specific -- for example, there is no memory overcommit under Windows. Sure, we could add a Linux specific hint, indication documentation. But I'm not sure if most end users stumbling into such an error+hint would be able to make sense of memory overcommit details (not to mention that they know what it even is) :) You can run into memory allocation issues with many applications. Let me give you a simple example t480s: ~ $ dd if=/dev/zero of=/dev/null ibs=100G dd: memory exhausted by input buffer of size 107374182400 bytes (100 GiB) So indicating the size of the failing allocation might be just good enough. For the other parts it's usually just "the way the OS was configured, it does not think it can allow this allocation".
* David Hildenbrand (david@redhat.com) wrote: > On 23.08.21 12:34, Philippe Mathieu-Daudé wrote: > > On 8/23/21 12:24 PM, David Hildenbrand wrote: > > > On 23.08.21 12:12, Philippe Mathieu-Daudé wrote: > > > > On 8/23/21 11:29 AM, David Hildenbrand wrote: > > > > > On 23.08.21 11:23, Peter Maydell wrote: > > > > > > On Mon, 23 Aug 2021 at 09:40, David Hildenbrand <david@redhat.com> > > > > > > wrote: > > > > > > > Not opposed to printing the size, although I doubt that it will really > > > > > > > stop similar questions/problems getting raised. > > > > > > > > > > > > The case that triggered this was somebody thinking > > > > > > -m took a byte count, so very likely that an error message > > > > > > saying "you tried to allocate 38TB" would have made their > > > > > > mistake clear in a way that just "allocation failed" did not. > > > > > > It also means that if a future user asks us for help then > > > > > > we can look at the error message and immediately tell them > > > > > > the problem, rather than going "hmm, what are all the possible > > > > > > ways that allocation might have failed" and going off down > > > > > > rabbitholes like VM overcommit settings... > > > > > > > > > > We've had similar issues recently where Linux memory overcommit handling > > > > > rejected the allocation -- and the user was well aware about the actual > > > > > size. You won't be able to catch such reports, because people don't > > > > > understand how Linux memory overcommit handling works or was configured. > > > > > > > > > > "I have 3 GiB of free memory, why can't I create a 3 GiB VM". "I have 3 > > > > > GiB of RAM, why can't I create a 3 GiB VM even if it won't make use of > > > > > all 3 GiB of memory". > > > > > > > > > > Thus my comment, it will only stop very basic usage issues. And I agree > > > > > that looking at the error *might* help. It didn't help for the cases I > > > > > just described, because we need much more system information to make a > > > > > guess what the user error actually is. > > > > > > > > Is it possible to get the maximal overcommitable amount on Linux? > > > > > > Not reliably I think. > > > > > > In the "always" mode, there is none. > > > > > > In the "guess"/"estimate" mode, the kernel takes a guess (currently > > > implemented as checking if the mmap size <= total RAM + total SWAP). > > > Committable = MemTotal + SwapTotal > > > > > > In the "never" mode: > > > Committable = CommitLimit - Committed_AS > > > However, the value gets further reduced for !root applications by > > > /proc/sys/vm/admin_reserve_kbytes. > > > > > > Replicating these calculations in user space would be suboptimal IMHO. > > > > What about simply giving a hint about memory overcommit and display > > a link to documentation with longer description about how to check > > and figure out this issue? > > That would be highly OS-specific -- for example, there is no memory > overcommit under Windows. Sure, we could add a Linux specific hint, > indication documentation. But I'm not sure if most end users stumbling into > such an error+hint would be able to make sense of memory overcommit details > (not to mention that they know what it even is) :) > > You can run into memory allocation issues with many applications. Let me > give you a simple example > > t480s: ~ $ dd if=/dev/zero of=/dev/null ibs=100G > dd: memory exhausted by input buffer of size 107374182400 bytes (100 GiB) > > So indicating the size of the failing allocation might be just good enough. 
> For the other parts it's usually just "the way the OS was configured, it > does not think it can allow this allocation". Does it also get complicated by the use of CGroup? Dave > -- > Thanks, > > David / dhildenb >
On 24.08.21 10:37, Dr. David Alan Gilbert wrote: > * David Hildenbrand (david@redhat.com) wrote: >> On 23.08.21 12:34, Philippe Mathieu-Daudé wrote: >>> On 8/23/21 12:24 PM, David Hildenbrand wrote: >>>> On 23.08.21 12:12, Philippe Mathieu-Daudé wrote: >>>>> On 8/23/21 11:29 AM, David Hildenbrand wrote: >>>>>> On 23.08.21 11:23, Peter Maydell wrote: >>>>>>> On Mon, 23 Aug 2021 at 09:40, David Hildenbrand <david@redhat.com> >>>>>>> wrote: >>>>>>>> Not opposed to printing the size, although I doubt that it will really >>>>>>>> stop similar questions/problems getting raised. >>>>>>> >>>>>>> The case that triggered this was somebody thinking >>>>>>> -m took a byte count, so very likely that an error message >>>>>>> saying "you tried to allocate 38TB" would have made their >>>>>>> mistake clear in a way that just "allocation failed" did not. >>>>>>> It also means that if a future user asks us for help then >>>>>>> we can look at the error message and immediately tell them >>>>>>> the problem, rather than going "hmm, what are all the possible >>>>>>> ways that allocation might have failed" and going off down >>>>>>> rabbitholes like VM overcommit settings... >>>>>> >>>>>> We've had similar issues recently where Linux memory overcommit handling >>>>>> rejected the allocation -- and the user was well aware about the actual >>>>>> size. You won't be able to catch such reports, because people don't >>>>>> understand how Linux memory overcommit handling works or was configured. >>>>>> >>>>>> "I have 3 GiB of free memory, why can't I create a 3 GiB VM". "I have 3 >>>>>> GiB of RAM, why can't I create a 3 GiB VM even if it won't make use of >>>>>> all 3 GiB of memory". >>>>>> >>>>>> Thus my comment, it will only stop very basic usage issues. And I agree >>>>>> that looking at the error *might* help. It didn't help for the cases I >>>>>> just described, because we need much more system information to make a >>>>>> guess what the user error actually is. >>>>> >>>>> Is it possible to get the maximal overcommitable amount on Linux? >>>> >>>> Not reliably I think. >>>> >>>> In the "always" mode, there is none. >>>> >>>> In the "guess"/"estimate" mode, the kernel takes a guess (currently >>>> implemented as checking if the mmap size <= total RAM + total SWAP). >>>> Committable = MemTotal + SwapTotal >>>> >>>> In the "never" mode: >>>> Committable = CommitLimit - Committed_AS >>>> However, the value gets further reduced for !root applications by >>>> /proc/sys/vm/admin_reserve_kbytes. >>>> >>>> Replicating these calculations in user space would be suboptimal IMHO. >>> >>> What about simply giving a hint about memory overcommit and display >>> a link to documentation with longer description about how to check >>> and figure out this issue? >> >> That would be highly OS-specific -- for example, there is no memory >> overcommit under Windows. Sure, we could add a Linux specific hint, >> indication documentation. But I'm not sure if most end users stumbling into >> such an error+hint would be able to make sense of memory overcommit details >> (not to mention that they know what it even is) :) >> >> You can run into memory allocation issues with many applications. Let me >> give you a simple example >> >> t480s: ~ $ dd if=/dev/zero of=/dev/null ibs=100G >> dd: memory exhausted by input buffer of size 107374182400 bytes (100 GiB) >> >> So indicating the size of the failing allocation might be just good enough. 
>> For the other parts it's usually just "the way the OS was configured, it >> does not think it can allow this allocation". > > Does it also get complicated by the use of CGroup? Not in terms of memory overcommit AFAIU. cgroups only control actually memory consumption, not mmap() creation. > > Dave
On Tue, 24 Aug 2021 09:37:54 +0100 "Dr. David Alan Gilbert" <dgilbert@redhat.com> wrote: > * David Hildenbrand (david@redhat.com) wrote: > > On 23.08.21 12:34, Philippe Mathieu-Daudé wrote: > > > On 8/23/21 12:24 PM, David Hildenbrand wrote: > > > > On 23.08.21 12:12, Philippe Mathieu-Daudé wrote: > > > > > On 8/23/21 11:29 AM, David Hildenbrand wrote: > > > > > > On 23.08.21 11:23, Peter Maydell wrote: > > > > > > > On Mon, 23 Aug 2021 at 09:40, David Hildenbrand <david@redhat.com> > > > > > > > wrote: > > > > > > > > Not opposed to printing the size, although I doubt that it will really > > > > > > > > stop similar questions/problems getting raised. > > > > > > > > > > > > > > The case that triggered this was somebody thinking > > > > > > > -m took a byte count, so very likely that an error message > > > > > > > saying "you tried to allocate 38TB" would have made their > > > > > > > mistake clear in a way that just "allocation failed" did not. > > > > > > > It also means that if a future user asks us for help then > > > > > > > we can look at the error message and immediately tell them > > > > > > > the problem, rather than going "hmm, what are all the possible > > > > > > > ways that allocation might have failed" and going off down > > > > > > > rabbitholes like VM overcommit settings... > > > > > > > > > > > > We've had similar issues recently where Linux memory overcommit handling > > > > > > rejected the allocation -- and the user was well aware about the actual > > > > > > size. You won't be able to catch such reports, because people don't > > > > > > understand how Linux memory overcommit handling works or was configured. > > > > > > > > > > > > "I have 3 GiB of free memory, why can't I create a 3 GiB VM". "I have 3 > > > > > > GiB of RAM, why can't I create a 3 GiB VM even if it won't make use of > > > > > > all 3 GiB of memory". > > > > > > > > > > > > Thus my comment, it will only stop very basic usage issues. And I agree > > > > > > that looking at the error *might* help. It didn't help for the cases I > > > > > > just described, because we need much more system information to make a > > > > > > guess what the user error actually is. > > > > > > > > > > Is it possible to get the maximal overcommitable amount on Linux? > > > > > > > > Not reliably I think. > > > > > > > > In the "always" mode, there is none. > > > > > > > > In the "guess"/"estimate" mode, the kernel takes a guess (currently > > > > implemented as checking if the mmap size <= total RAM + total SWAP). > > > > Committable = MemTotal + SwapTotal > > > > > > > > In the "never" mode: > > > > Committable = CommitLimit - Committed_AS > > > > However, the value gets further reduced for !root applications by > > > > /proc/sys/vm/admin_reserve_kbytes. > > > > > > > > Replicating these calculations in user space would be suboptimal IMHO. > > > > > > What about simply giving a hint about memory overcommit and display > > > a link to documentation with longer description about how to check > > > and figure out this issue? > > > > That would be highly OS-specific -- for example, there is no memory > > overcommit under Windows. Sure, we could add a Linux specific hint, > > indication documentation. But I'm not sure if most end users stumbling into > > such an error+hint would be able to make sense of memory overcommit details > > (not to mention that they know what it even is) :) > > > > You can run into memory allocation issues with many applications. 
Let me > > give you a simple example > > > > t480s: ~ $ dd if=/dev/zero of=/dev/null ibs=100G > > dd: memory exhausted by input buffer of size 107374182400 bytes (100 GiB) > > > > So indicating the size of the failing allocation might be just good enough. > > For the other parts it's usually just "the way the OS was configured, it > > does not think it can allow this allocation". > > Does it also get complicated by the use of CGroup? And if it's not complex enough, add to that NUMA node binding, which introduces additional limitations on RAM size that can be allocated. > Dave > > > -- > > Thanks, > > > > David / dhildenb > >
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 2e18947598e..2f300a9e79b 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -1982,8 +1982,10 @@ static void ram_block_add(RAMBlock *new_block, Error **errp)
                                                   &new_block->mr->align,
                                                   shared, noreserve);
         if (!new_block->host) {
+            g_autofree char *size_s = size_to_str(new_block->max_length);
             error_setg_errno(errp, errno,
-                             "cannot set up guest memory '%s'",
+                             "Cannot set up %s of guest memory '%s'",
+                             size_s,
                              memory_region_name(new_block->mr));
             qemu_mutex_unlock_ramlist();
             return;
When Linux refuses to overcommit a seriously wild allocation we get:

$ qemu-system-i386 -m 40000000
qemu-system-i386: cannot set up guest memory 'pc.ram': Cannot allocate memory

Slighly improve the error message, displaying the memory size
requested (in case the user didn't expect unspecified memory size
unit is in MiB):

$ qemu-system-i386 -m 40000000
qemu-system-i386: Cannot set up 38.1 TiB of guest memory 'pc.ram': Cannot allocate memory

Reported-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 softmmu/physmem.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
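For readers wondering where figures like "38.1 TiB" come from: the patch relies on QEMU's size_to_str() helper. The snippet below is only a rough standalone approximation of that human-readable formatting, not the actual QEMU function:

```c
/*
 * Rough standalone approximation of the human-readable size formatting
 * the patch gets from QEMU's size_to_str(); not the actual QEMU helper,
 * just enough to show where "38.1 TiB" and "62.5 GiB" come from.
 */
#include <stdio.h>
#include <stdint.h>

static void print_human_size(uint64_t bytes)
{
    static const char *const units[] = {
        "B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"
    };
    double val = bytes;
    int i = 0;

    while (val >= 1024.0 && i < 6) {
        val /= 1024.0;
        i++;
    }
    printf("%.3g %s\n", val, units[i]);   /* three significant figures */
}

int main(void)
{
    print_human_size(40000000ULL << 20);   /* -m 40000000 -> ~38.1 TiB */
    print_human_size(64000ULL << 20);      /* -m 64000    -> 62.5 GiB  */
    return 0;
}
```

The output is rounded to three significant figures, which is why 40000000 MiB shows up as "38.1 TiB" rather than an exact byte count.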