Patchwork [v5,1/3] livepatch: add (un)patch callbacks

Submitter Joe Lawrence
Date Sept. 13, 2017, 1:53 p.m.
Message ID <6ed9c3ac-ac8f-8678-a884-754cdd07b447@redhat.com>
Permalink /patch/9951435/
State New

Comments

Joe Lawrence - Sept. 13, 2017, 1:53 p.m.
On 09/13/2017 03:29 AM, Miroslav Benes wrote:
> On Tue, 12 Sep 2017, Josh Poimboeuf wrote:
> 
>> On Tue, Sep 12, 2017 at 06:05:44PM -0400, Joe Lawrence wrote:
>>> On Tue, Sep 12, 2017 at 11:48:48AM -0400, Joe Lawrence wrote:
>>> I've re-read this a few times, and I think I might have been originally
>>> off-base with what I thought you were concerned about.  But I think I
>>> grok it now: the problem you pointed out arises because
>>> klp_module_coming() iterates like so:
>>>
>>>   for each klp_patch
>>>     for each kobj in klp_patch
>>>
>>> which means that we may have made pre-patch callbacks and patched a
>>> given kobj for an earlier klp_patch that now fails for a later
>>> klp_patch.
> 
> Yes, that's the scenario.
>  
>>> What should be the defined behavior in this case?  I would expect that
>>> we need to unpatch all similar kobjs across klp_patches which have
>>> already been successfully patched.  In turn, their post-unpatch
>>> callbacks should be invoked.
>>>
>>> If that's true, maybe this would make a better follow-on patch.
> 
> Yes, you'd need to loop back, unpatch everything and call post-unpatch 
> callbacks too. Probably too much for this patch set, so we can deal with 
> the problem later.

Completely untested/compiled, but something like (sans callbacks)?


>> The rabbit hole seems to be getting deeper, is it really worth it?  I'd
>> rather we just make the pre-patch handler return void and be done with
>> it, as Joe originally proposed.
>>
>> So far, allowing the pre-patch handler to halt patching is a purely
>> theoretical feature, nobody even knows if we need it yet, and whether
>> it's worth the pain.  So I'd vote to just simplify this mess and let
>> whoever wants the feature try to implement it :-)

"Rabbit hole" is an apt description :) the question is whether this is
the last hurdle, or just one of many more coming our way.  I'd like to
think the former, but I'm the guy down in the rabbit hole, so my
perspective is tainted.

Ripping this out of the code would be relatively easy.  Re-doing the
tests/documentation/comments will be a bit of work to reword everything.

> Unfortunately, the problem is there even without Joe's callbacks. If it 
> was only a problem of callbacks, I'd go along with you. I see two options.
> 
> 1. we'll fix this for klp_patch_object(). Then callbacks' problem would be 
> simple to solve, because the infrastructure would be already there.

If the bugfix is like the above, then it's not too bad a diversion.  I
can run some tests this afternoon to try and tackle this.

> 2. we'll remove any error handling from klp_coming_module and we'll allow 
> target modules to load even with a patching failure. This doesn't seem to 
> be the right approach...

I agree.

-- Joe

Patch

--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -894,6 +894,29 @@  int klp_module_coming(struct module *mod)
 	pr_warn("patch '%s' failed for module '%s', refusing to load module '%s'\n",
 		patch->mod->name, obj->mod->name, obj->mod->name);
+
+	/*
+	 * Run back through the patch list and unpatch any klp_object
+	 * that matches the one we errored on above.
+	 */
+	list_for_each_entry(patch, &klp_patches, list) {
+
+		if (!patch->enabled || patch == klp_transition_patch)
+			continue;
+
+		klp_for_each_object(patch, obj) {
+
+			if (!obj->patched ||
+			    !klp_is_module(obj) ||
+			    strcmp(obj->name, mod->name))
+				continue;
+
+			klp_unpatch_object(obj);
+			/* post-unpatch callback would go here */
+
+			break;
+		}
+	}
+
 	mod->klp_alive = false;
 	klp_free_object_loaded(obj);
 	mutex_unlock(&klp_mutex);