| Message ID | 20221011113511.1.I1cf52674cd85d07b300fe3fff3ad6ce830304bb6@changeid |
|---|---|
| State | New, archived |
| Series | pstore/ram: Ensure stable pmsg address with per-CPU ftrace buffers |
On Tue, Oct 11, 2022 at 11:36:31AM -0700, pso@chromium.org wrote:
> From: Paramjit Oberoi <psoberoi@google.com>
>
> When allocating ftrace pstore zones, there may be space left over at the
> end of the region. The paddr pointer needs to be advanced to account for
> this so that the next region (pmsg) ends up at the correct location.
>
> Signed-off-by: Paramjit Oberoi <pso@chromium.org>
> Reviewed-by: Dmitry Torokhov <dtor@chromium.org>
> Signed-off-by: Paramjit Oberoi <psoberoi@google.com>

Hm, interesting point. Since only ftrace is dynamically sized in this
fashion, how about just moving the pmsg allocation before ftrace, and
adding a comment that for now ftrace should be allocated last? i.e.
something like:

diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
index 650f89c8ae36..9e11d3e7dffe 100644
--- a/fs/pstore/ram.c
+++ b/fs/pstore/ram.c
@@ -788,6 +788,11 @@ static int ramoops_probe(struct platform_device *pdev)
 	if (err)
 		goto fail_init;
 
+	err = ramoops_init_prz("pmsg", dev, cxt, &cxt->mprz, &paddr,
+			       cxt->pmsg_size, 0);
+	if (err)
+		goto fail_init;
+
 	cxt->max_ftrace_cnt = (cxt->flags & RAMOOPS_FLAG_FTRACE_PER_CPU)
 				? nr_cpu_ids
 				: 1;
@@ -799,11 +804,6 @@ static int ramoops_probe(struct platform_device *pdev)
 	if (err)
 		goto fail_init;
 
-	err = ramoops_init_prz("pmsg", dev, cxt, &cxt->mprz, &paddr,
-			       cxt->pmsg_size, 0);
-	if (err)
-		goto fail_init;
-
 	cxt->pstore.data = cxt;
 	/*
 	 * Prepare frontend flags based on which areas are initialized.

(Note that this won't apply to the current tree, where I've started some
other refactoring.)
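[Editor's note: for readers unfamiliar with the layout issue under discussion, the sketch below shows how leftover space arises when the ftrace region is split into per-CPU zones. This is plain userspace C, not kernel code, and the region size and zone count are made-up values chosen only for illustration.]

```c
#include <stdio.h>

int main(void)
{
	/* Hypothetical layout values, for illustration only. */
	unsigned long ftrace_size = 128 * 1024; /* total ftrace region size */
	unsigned int  nr_zones    = 6;          /* e.g. nr_cpu_ids, not a power of two */

	unsigned long zone_sz  = ftrace_size / nr_zones; /* rounds down */
	unsigned long used     = zone_sz * nr_zones;     /* how far the allocation loop advances */
	unsigned long leftover = ftrace_size - used;     /* unaccounted tail of the region */

	/* If 'leftover' is not skipped, the next region (pmsg) starts that
	 * many bytes earlier than a fixed-layout consumer would expect. */
	printf("zone_sz=%lu used=%lu leftover=%lu\n", zone_sz, used, leftover);
	return 0;
}
```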
On Tue, Oct 11, 2022 at 12:59:50PM -0700, Paramjit Oberoi wrote:
> > Hm, interesting point. Since only ftrace is dynamically sized in this
> > fashion, how about just moving the pmsg allocation before ftrace, and
> > adding a comment that for now ftrace should be allocated last?
>
> That is a good idea, and it would solve the problem.
>
> The only downside is it would break some code that works today because it
> ran in contexts where the pmsg address was stable (no per-cpu ftrace
> buffers, or power-of-two CPUs).

I don't follow? And actually, I wonder about the original patch now --
nothing should care about the actual addresses. Everything should be
coming out of the pstore filesystem.
On Tue, Oct 11, 2022 at 01:44:54PM -0700, Paramjit Oberoi wrote:
> > > The only downside is it would break some code that works today because it
> > > ran in contexts where the pmsg address was stable (no per-cpu ftrace
> > > buffers, or power-of-two CPUs).
> >
> > I don't follow? And actually, I wonder about the original patch now --
> > nothing should care about the actual addresses. Everything should be
> > coming out of the pstore filesystem.
>
> We are running VMs with the pstore RAM mapped to a file, and using some
> tools outside the VM to read/manipulate the pstore after VM shutdown.

Ah-ha! Interesting. Well, I think it will be more stable this way even
for that. :)
diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
index fefe3d391d3af..3bca6cd34c02a 100644
--- a/fs/pstore/ram.c
+++ b/fs/pstore/ram.c
@@ -554,10 +554,12 @@ static int ramoops_init_przs(const char *name,
 			goto fail;
 		}
 		*paddr += zone_sz;
+		mem_sz -= zone_sz;
 		prz_ar[i]->type = pstore_name_to_type(name);
 	}
 
 	*przs = prz_ar;
+	*paddr += mem_sz;
 	return 0;
 
 fail:
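[Editor's note: a minimal sketch of the accounting this patch adds, in plain C with hypothetical sizes rather than the kernel code itself. The per-zone loop also decrements the remaining region size, so whatever is left after the loop is exactly the unused tail, and adding it to the running address puts the next (pmsg) region at region start + full ftrace size regardless of the zone count.]

```c
#include <stdio.h>

/* Sketch of the added accounting; 'paddr' and 'mem_sz' stand in for the
 * kernel variables, and the numbers below are made up. */
static void init_zones(unsigned long *paddr, unsigned long mem_sz, unsigned int cnt)
{
	unsigned long zone_sz = mem_sz / cnt;

	for (unsigned int i = 0; i < cnt; i++) {
		*paddr += zone_sz;   /* existing behaviour: advance past this zone */
		mem_sz -= zone_sz;   /* added: track how much of the region remains */
	}
	*paddr += mem_sz;            /* added: skip the leftover tail of the region */
}

int main(void)
{
	unsigned long paddr = 0;     /* pretend the ftrace region starts at 0 */

	init_zones(&paddr, 128 * 1024, 6);
	/* Prints 131072: the next region starts at the full ftrace size,
	 * with or without a remainder from the division above. */
	printf("next region starts at %lu\n", paddr);
	return 0;
}
```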