[v1] proc/vmcore: don't fake reading zeroes on surprise vmcore_cb unregistration

Message ID 20211111192243.22002-1-david@redhat.com (mailing list archive)
State New
Series [v1] proc/vmcore: don't fake reading zeroes on surprise vmcore_cb unregistration

Commit Message

David Hildenbrand Nov. 11, 2021, 7:22 p.m. UTC
In commit cc5f2704c934 ("proc/vmcore: convert oldmem_pfn_is_ram callback
to more generic vmcore callbacks"), we added detection of surprise
vmcore_cb unregistration after the vmcore was already opened. Once
detected, we warn the user and simulate reading zeroes from that point on
when accessing the vmcore.

The basic reason was that unexpected unregistration, for example, by
manually unbinding a driver from a device after opening the
vmcore, is not supported and could result in reading oldmem that the
vmcore_cb would have prohibited while registered. However,
something like that can similarly be triggered by a user who is really
looking for trouble, simply by unbinding the relevant driver before opening
the vmcore -- or by disallowing loading the driver in the first place.
So it's actually of limited help.

Currently, unregistration can only be triggered via virtio-mem when
manually unbinding the driver from the device inside the VM; there is no
way to trigger it from the hypervisor, as hypervisors don't allow for
unplugging virtio-mem devices -- ripping out system RAM from a VM without
coordination with the guest is usually not a good idea.

The important part is that unbinding the driver and unregistering the
vmcore_cb while concurrently reading the vmcore won't crash the system,
and that is handled by the rwsem.
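
To illustrate the locking scheme, here is a minimal sketch of the pattern
in fs/proc/vmcore.c (simplified, not the literal code):

/* Writer side: callback teardown, e.g., triggered by driver unbinding. */
void unregister_vmcore_cb(struct vmcore_cb *cb)
{
	down_write(&vmcore_cb_rwsem);
	list_del(&cb->next);
	if (vmcore_opened)
		pr_warn_once("Unexpected vmcore callback unregistration\n");
	up_write(&vmcore_cb_rwsem);
}

/* Reader side: any vmcore access that has to consult the callbacks. */
static void access_oldmem_pfn(unsigned long pfn)
{
	down_read(&vmcore_cb_rwsem);
	if (pfn_is_ram(pfn)) {	/* walks vmcore_cb_list */
		/* read or map the page from oldmem */
	}
	up_read(&vmcore_cb_rwsem);
}

A reader thus either sees a callback fully registered or fully removed,
never a half-torn-down one.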

To make the mechanism more future proof, let's remove the "read zero"
part, but leave the warning in place. For example, we could have a future
driver (like virtio-balloon) that will contact the hypervisor to figure out
if we already populated a page for a given PFN. Hotunplugging such a device
and consequently unregistering the vmcore_cb could be triggered from the
hypervisor without harming the system even while kdump is running. In that
case, we don't want to silently end up with a vmcore that contains wrong
data, because the user inside the VM might be unaware of the hypervisor
action and might easily miss the warning in the log.
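
For illustration, such a future driver could wire up a callback roughly as
follows. This is a hypothetical sketch only: the my_dev_* names and the
my_dev_pfn_populated() hypervisor query are made up; struct vmcore_cb and
register_vmcore_cb()/unregister_vmcore_cb() are the interface introduced in
commit cc5f2704c934.

#include <linux/crash_dump.h>

/* Hypothetical: ask the hypervisor whether it ever populated this PFN. */
static bool my_dev_vmcore_pfn_is_ram(struct vmcore_cb *cb, unsigned long pfn)
{
	return my_dev_pfn_populated(pfn);	/* made-up hypervisor query */
}

static struct vmcore_cb my_dev_vmcore_cb = {
	.pfn_is_ram = my_dev_vmcore_pfn_is_ram,
};

static int my_dev_probe(struct my_dev *dev)
{
	/* In the kdump kernel, start filtering oldmem accesses. */
	register_vmcore_cb(&my_dev_vmcore_cb);
	return 0;
}

static void my_dev_remove(struct my_dev *dev)
{
	/* Hypervisor-triggered hotunplug, possibly while /proc/vmcore is open. */
	unregister_vmcore_cb(&my_dev_vmcore_cb);
}

With this patch, that unregistration merely warns instead of turning all
further vmcore reads into zeroes.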

Cc: Dave Young <dyoung@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Philipp Rudo <prudo@redhat.com>
Cc: kexec@lists.infradead.org
Cc: linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 fs/proc/vmcore.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)


base-commit: debe436e77c72fcee804fb867f275e6d31aa999c

Comments

Baoquan He Nov. 12, 2021, 3:30 a.m. UTC | #1
On 11/11/21 at 08:22pm, David Hildenbrand wrote:
> In commit cc5f2704c934 ("proc/vmcore: convert oldmem_pfn_is_ram callback
> to more generic vmcore callbacks"), we added detection of surprise
> vmcore_cb unregistration after the vmcore was already opened. Once
> detected, we warn the user and simulate reading zeroes from that point on
> when accessing the vmcore.
> 
> The basic reason was that unexpected unregistration, for example, by
> manually unbinding a driver from a device after opening the
> vmcore, is not supported and could result in reading oldmem that the
> vmcore_cb would have prohibited while registered. However,
> something like that can similarly be triggered by a user who is really
> looking for trouble, simply by unbinding the relevant driver before opening
> the vmcore -- or by disallowing loading the driver in the first place.
> So it's actually of limited help.

Yes, this is the change that I would like to see in the original patch
"proc/vmcore: convert oldmem_pfn_is_ram callback to more generic vmcore callbacks".
I am happy to have this patch appended to commit cc5f2704c934.

> 
> Currently, unregistration can only be triggered via virtio-mem when
> manually unbinding the driver from the device inside the VM; there is no
> way to trigger it from the hypervisor, as hypervisors don't allow for
> unplugging virtio-mem devices -- ripping out system RAM from a VM without
> coordination with the guest is usually not a good idea.
> 
> The important part is that unbinding the driver and unregistering the
> vmcore_cb while concurrently reading the vmcore won't crash the system,
> and that is handled by the rwsem.
> 
> To make the mechanism more future proof, let's remove the "read zero"
> part, but leave the warning in place. For example, we could have a future
> driver (like virtio-balloon) that will contact the hypervisor to figure out
> if we already populated a page for a given PFN. Hotunplugging such a device
> and consequently unregistering the vmcore_cb could be triggered from the
> hypervisor without harming the system even while kdump is running. In that

I am a little confused; could you say more about "contact the hypervisor to
figure out if we already populated a page for a given PFN"? I think
vmcore dumping relies on the elfcorehdr, which is created beforehand, and
relies on a vmcore_cb registered in advance too. Could virtio-balloon take
another way to interact with kdump to make sure the RAM is dumpable?

Nevertheless, this patch looks good to me, thanks.

Acked-by: Baoquan He <bhe@redhat.com>

> case, we don't want to silently end up with a vmcore that contains wrong
> data, because the user inside the VM might be unaware of the hypervisor
> action and might easily miss the warning in the log.
> 
> Cc: Dave Young <dyoung@redhat.com>
> Cc: Baoquan He <bhe@redhat.com>
> Cc: Vivek Goyal <vgoyal@redhat.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Philipp Rudo <prudo@redhat.com>
> Cc: kexec@lists.infradead.org
> Cc: linux-mm@kvack.org
> Cc: linux-fsdevel@vger.kernel.org
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  fs/proc/vmcore.c | 10 ++--------
>  1 file changed, 2 insertions(+), 8 deletions(-)
> 
> diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
> index 30a3b66f475a..948691cf4a1a 100644
> --- a/fs/proc/vmcore.c
> +++ b/fs/proc/vmcore.c
> @@ -65,8 +65,6 @@ static size_t vmcoredd_orig_sz;
>  static DECLARE_RWSEM(vmcore_cb_rwsem);
>  /* List of registered vmcore callbacks. */
>  static LIST_HEAD(vmcore_cb_list);
> -/* Whether we had a surprise unregistration of a callback. */
> -static bool vmcore_cb_unstable;
>  /* Whether the vmcore has been opened once. */
>  static bool vmcore_opened;
>  
> @@ -94,10 +92,8 @@ void unregister_vmcore_cb(struct vmcore_cb *cb)
>  	 * very unusual (e.g., forced driver removal), but we cannot stop
>  	 * unregistering.
>  	 */
> -	if (vmcore_opened) {
> +	if (vmcore_opened)
>  		pr_warn_once("Unexpected vmcore callback unregistration\n");
> -		vmcore_cb_unstable = true;
> -	}
>  	up_write(&vmcore_cb_rwsem);
>  }
>  EXPORT_SYMBOL_GPL(unregister_vmcore_cb);
> @@ -108,8 +104,6 @@ static bool pfn_is_ram(unsigned long pfn)
>  	bool ret = true;
>  
>  	lockdep_assert_held_read(&vmcore_cb_rwsem);
> -	if (unlikely(vmcore_cb_unstable))
> -		return false;
>  
>  	list_for_each_entry(cb, &vmcore_cb_list, next) {
>  		if (unlikely(!cb->pfn_is_ram))
> @@ -577,7 +571,7 @@ static int vmcore_remap_oldmem_pfn(struct vm_area_struct *vma,
>  	 * looping over all pages without a reason.
>  	 */
>  	down_read(&vmcore_cb_rwsem);
> -	if (!list_empty(&vmcore_cb_list) || vmcore_cb_unstable)
> +	if (!list_empty(&vmcore_cb_list))
>  		ret = remap_oldmem_pfn_checked(vma, from, pfn, size, prot);
>  	else
>  		ret = remap_oldmem_pfn_range(vma, from, pfn, size, prot);
> 
> base-commit: debe436e77c72fcee804fb867f275e6d31aa999c
> -- 
> 2.31.1
>
David Hildenbrand Nov. 12, 2021, 8:28 a.m. UTC | #2
On 12.11.21 04:30, Baoquan He wrote:
> On 11/11/21 at 08:22pm, David Hildenbrand wrote:
>> In commit cc5f2704c934 ("proc/vmcore: convert oldmem_pfn_is_ram callback
>> to more generic vmcore callbacks"), we added detection of surprise
>> vmcore_cb unregistration after the vmcore was already opened. Once
>> detected, we warn the user and simulate reading zeroes from that point on
>> when accessing the vmcore.
>>
>> The basic reason was that unexpected unregistration, for example, by
>> manually unbinding a driver from a device after opening the
>> vmcore, is not supported and could result in reading oldmem that the
>> vmcore_cb would have prohibited while registered. However,
>> something like that can similarly be triggered by a user who is really
>> looking for trouble, simply by unbinding the relevant driver before opening
>> the vmcore -- or by disallowing loading the driver in the first place.
>> So it's actually of limited help.
> 
> Yes, this is the change that I would like to see in the original patch
> "proc/vmcore: convert oldmem_pfn_is_ram callback to more generic vmcore callbacks".
> I am happy to have this patch appended to commit cc5f2704c934.

Good, thanks!

> 
>>
>> Currently, unregistration can only be triggered via virtio-mem when
>> manually unbinding the driver from the device inside the VM; there is no
>> way to trigger it from the hypervisor, as hypervisors don't allow for
>> unplugging virtio-mem devices -- ripping out system RAM from a VM without
>> coordination with the guest is usually not a good idea.
>>
>> The important part is that unbinding the driver and unregistering the
>> vmcore_cb while concurrently reading the vmcore won't crash the system,
>> and that is handled by the rwsem.
>>
>> To make the mechanism more future proof, let's remove the "read zero"
>> part, but leave the warning in place. For example, we could have a future
>> driver (like virtio-balloon) that will contact the hypervisor to figure out
>> if we already populated a page for a given PFN. Hotunplugging such a device
>> and consequently unregistering the vmcore_cb could be triggered from the
>> hypervisor without harming the system even while kdump is running. In that
> 
> I am a little confused; could you say more about "contact the hypervisor to
> figure out if we already populated a page for a given PFN"? I think
> vmcore dumping relies on the elfcorehdr, which is created beforehand, and
> relies on a vmcore_cb registered in advance too. Could virtio-balloon take
> another way to interact with kdump to make sure the RAM is dumpable?

This is essentially what the XEN callback does: check if a PFN is
actually populated in the hypervisor; if not, avoid reading it so we
won't be faulting+populating a fresh/zero page in the hypervisor just to
be able to dump it in the guest. But in the XEN world we usually simply
rely on straight hypercalls that are not glued to actual devices that can
get hot(un)plugged.
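
Simplified, and from memory (based on arch/x86/xen/mmu_hvm.c, so details
may differ), the XEN check boils down to:

static bool xen_vmcore_pfn_is_ram(struct vmcore_cb *cb, unsigned long pfn)
{
	struct xen_hvm_get_mem_type a = {
		.domid = DOMID_SELF,
		.pfn = pfn,
	};

	if (HYPERVISOR_hvm_op(HVMOP_get_mem_type, &a))
		return true;	/* hypercall failed: be conservative, dump it */
	/* Unpopulated/emulated MMIO is not backed by RAM: don't touch it. */
	return a.mem_type != HVMMEM_mmio_dm;
}

So "is this PFN RAM?" is answered by a plain hypercall and doesn't depend
on any device state, which is why unregistration was never a concern in
the XEN case.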

Once you have some device that performs such checks instead, and that
could get hotunplugged and unregister the vmcore_cb (virtio-balloon is
just one example), you would be able to trigger this.

As we're dealing with a moving target (hypervisor will populate pages as
necessary once the old kernel accesses them), there isn't really a way
to adjust this in the old kernel -- where we build the elfcorehdr. We
could try to adjust the elfcorehdr in the new kernel, but that certainly
opens up another can of worms.

But again, this is just an example to back the "future proof" claim
because Dave was explicitly concerned about this situation.
Baoquan He Nov. 12, 2021, 8:50 a.m. UTC | #3
On 11/12/21 at 09:28am, David Hildenbrand wrote:
> On 12.11.21 04:30, Baoquan He wrote:
> > On 11/11/21 at 08:22pm, David Hildenbrand wrote:
> >> In commit cc5f2704c934 ("proc/vmcore: convert oldmem_pfn_is_ram callback
> >> to more generic vmcore callbacks"), we added detection of surprise
> >> vmcore_cb unregistration after the vmcore was already opened. Once
> >> detected, we warn the user and simulate reading zeroes from that point on
> >> when accessing the vmcore.
> >>
> >> The basic reason was that unexpected unregistration, for example, by
> >> manually unbinding a driver from a device after opening the
> >> vmcore, is not supported and could result in reading oldmem that the
> >> vmcore_cb would have prohibited while registered. However,
> >> something like that can similarly be triggered by a user who is really
> >> looking for trouble, simply by unbinding the relevant driver before opening
> >> the vmcore -- or by disallowing loading the driver in the first place.
> >> So it's actually of limited help.
> > 
> > Yes, this is the change that I would like to see in the original patch
> > "proc/vmcore: convert oldmem_pfn_is_ram callback to more generic vmcore callbacks".
> > I am happy to have this patch appended to commit cc5f2704c934.
> 
> Good, thanks!
> 
> > 
> >>
> >> Currently, unregistration can only be triggered via virtio-mem when
> >> manually unbinding the driver from the device inside the VM; there is no
> >> way to trigger it from the hypervisor, as hypervisors don't allow for
> >> unplugging virtio-mem devices -- ripping out system RAM from a VM without
> >> coordination with the guest is usually not a good idea.
> >>
> >> The important part is that unbinding the driver and unregistering the
> >> vmcore_cb while concurrently reading the vmcore won't crash the system,
> >> and that is handled by the rwsem.
> >>
> >> To make the mechanism more future proof, let's remove the "read zero"
> >> part, but leave the warning in place. For example, we could have a future
> >> driver (like virtio-balloon) that will contact the hypervisor to figure out
> >> if we already populated a page for a given PFN. Hotunplugging such a device
> >> and consequently unregistering the vmcore_cb could be triggered from the
> >> hypervisor without harming the system even while kdump is running. In that
> > 
> > I am a little confused; could you say more about "contact the hypervisor to
> > figure out if we already populated a page for a given PFN"? I think
> > vmcore dumping relies on the elfcorehdr, which is created beforehand, and
> > relies on a vmcore_cb registered in advance too. Could virtio-balloon take
> > another way to interact with kdump to make sure the RAM is dumpable?
> 
> This is essentially what the XEN callback does: check if a PFN is
> actually populated in the hypervisor; if not, avoid reading it so we
> won't be faulting+populating a fresh/zero page in the hypervisor just to
> be able to dump it in the guest. But in the XEN world we usually simply
> rely on straight hypercalls that are not glued to actual devices that can
> get hot(un)plugged.
> 
> Once you have some device that performs such checks instead, and that
> could get hotunplugged and unregister the vmcore_cb (virtio-balloon is
> just one example), you would be able to trigger this.
> 
> As we're dealing with a moving target (hypervisor will populate pages as
> necessary once the old kernel accesses them), there isn't really a way
> to adjust this in the old kernel -- where we build the elfcorehdr. We
> could try to adjust the elfcorehdr in the new kernel, but that certainly
> opens up another can of worms.

Sounds a little like magic, but it should be doable if we want to. Thanks
a lot for these details.

> 
> But again, this is just an example to back the "future proof" claim
> because Dave was explicitly concerned about this situation.
> 
> -- 
> Thanks,
> 
> David / dhildenb
>

Patch

diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 30a3b66f475a..948691cf4a1a 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -65,8 +65,6 @@  static size_t vmcoredd_orig_sz;
 static DECLARE_RWSEM(vmcore_cb_rwsem);
 /* List of registered vmcore callbacks. */
 static LIST_HEAD(vmcore_cb_list);
-/* Whether we had a surprise unregistration of a callback. */
-static bool vmcore_cb_unstable;
 /* Whether the vmcore has been opened once. */
 static bool vmcore_opened;
 
@@ -94,10 +92,8 @@  void unregister_vmcore_cb(struct vmcore_cb *cb)
 	 * very unusual (e.g., forced driver removal), but we cannot stop
 	 * unregistering.
 	 */
-	if (vmcore_opened) {
+	if (vmcore_opened)
 		pr_warn_once("Unexpected vmcore callback unregistration\n");
-		vmcore_cb_unstable = true;
-	}
 	up_write(&vmcore_cb_rwsem);
 }
 EXPORT_SYMBOL_GPL(unregister_vmcore_cb);
@@ -108,8 +104,6 @@  static bool pfn_is_ram(unsigned long pfn)
 	bool ret = true;
 
 	lockdep_assert_held_read(&vmcore_cb_rwsem);
-	if (unlikely(vmcore_cb_unstable))
-		return false;
 
 	list_for_each_entry(cb, &vmcore_cb_list, next) {
 		if (unlikely(!cb->pfn_is_ram))
@@ -577,7 +571,7 @@  static int vmcore_remap_oldmem_pfn(struct vm_area_struct *vma,
 	 * looping over all pages without a reason.
 	 */
 	down_read(&vmcore_cb_rwsem);
-	if (!list_empty(&vmcore_cb_list) || vmcore_cb_unstable)
+	if (!list_empty(&vmcore_cb_list))
 		ret = remap_oldmem_pfn_checked(vma, from, pfn, size, prot);
 	else
 		ret = remap_oldmem_pfn_range(vma, from, pfn, size, prot);