Message ID: 20230731071728.3493794-1-j.granados@samsung.com (mailing list archive)
State: Handled Elsewhere
On Mon, Jul 31, 2023 at 09:17:14AM +0200, Joel Granados wrote:
> Why?

It would be easier to read if the what went before the why.

> This is a preparation patch set that will make it easier for us to apply
> subsequent patches that will remove the sentinel element (last empty element)
> in the ctl_table arrays.
>
> In itself, it does not remove any sentinels but it is needed to bring all the
> advantages of the removal to fruition which is to help reduce the overall build
> time size of the kernel and run time memory bloat by about ~64 bytes per
> sentinel.

s/sentinel/declared ctl array

Because you're suggesting we want to remove the sentinel, but we want to
help the patch reviewer know that a sentinel is required per declared ctl
array.

You can also mention here briefly that this helps ensure that future moves
of sysctl arrays out from kernel/sysctl.c to their own subsystems won't be
penalized with a larger kernel build size or run time memory consumption.

Thanks for spinning this up again!

Luis
> Joel Granados (14):
>   sysctl: Prefer ctl_table_header in proc_sysctl
>   sysctl: Use ctl_table_header in list_for_each_table_entry
>   sysctl: Add ctl_table_size to ctl_table_header
>   sysctl: Add size argument to init_header
>   sysctl: Add a size arg to __register_sysctl_table
>   sysctl: Add size to register_sysctl
>   sysctl: Add size arg to __register_sysctl_init

This is looking great, thanks. I've taken the first 7 patches above to
sysctl-next to get more exposure / testing, since we're already on rc4.

Since the below patches involve more networking I'll wait to get more
feedback from networking folks before merging them.

>   sysctl: Add size to register_net_sysctl function
>   ax.25: Update to register_net_sysctl_sz
>   netfilter: Update to register_net_sysctl_sz
>   networking: Update to register_net_sysctl_sz
>   vrf: Update to register_net_sysctl_sz
>   sysctl: SIZE_MAX->ARRAY_SIZE in register_net_sysctl
>   sysctl: Use ctl_table_size as stopping criteria for list macro

Luis
On Mon, Jul 31, 2023 at 02:36:50PM -0700, Luis Chamberlain wrote:
> > Joel Granados (14):
> >   sysctl: Prefer ctl_table_header in proc_sysctl
> >   sysctl: Use ctl_table_header in list_for_each_table_entry
> >   sysctl: Add ctl_table_size to ctl_table_header
> >   sysctl: Add size argument to init_header
> >   sysctl: Add a size arg to __register_sysctl_table
> >   sysctl: Add size to register_sysctl
> >   sysctl: Add size arg to __register_sysctl_init
>
> This is looking great thanks, I've taken the first 7 patches above
> to sysctl-next to get more exposure / testing and since we're already
> on rc4.

Thx for the feedback.

> Since the below patches involve more networking I'll wait to get
> more feedback from networking folks before merging them.

Just FYI, these are all networking except for the last one. That one is
actually just in sysctl and will set everything up for removing the
sentinels.

> >   sysctl: Add size to register_net_sysctl function
> >   ax.25: Update to register_net_sysctl_sz
> >   netfilter: Update to register_net_sysctl_sz
> >   networking: Update to register_net_sysctl_sz
> >   vrf: Update to register_net_sysctl_sz
> >   sysctl: SIZE_MAX->ARRAY_SIZE in register_net_sysctl
> >   sysctl: Use ctl_table_size as stopping criteria for list macro
>
> Luis
On Mon, Jul 31, 2023 at 01:50:40PM -0700, Luis Chamberlain wrote:
> On Mon, Jul 31, 2023 at 09:17:14AM +0200, Joel Granados wrote:
> > Why?
>
> It would be easier to read if the what went before the why.

haha. I totally misunderstood you in
lore.kernel.org/all/ZMFizKFkVxUFtSqa@bombadil.infradead.org
I thought you meant to put the why first. I'll switch it back to having
the what first for V3.

> > This is a preparation patch set that will make it easier for us to apply
> > subsequent patches that will remove the sentinel element (last empty element)
> > in the ctl_table arrays.
> >
> > In itself, it does not remove any sentinels but it is needed to bring all the
> > advantages of the removal to fruition which is to help reduce the overall build
> > time size of the kernel and run time memory bloat by about ~64 bytes per
> > sentinel.
>
> s/sentinel/declared ctl array
>
> Because you're suggesting we want to remove the sentinel, but we want to
> help the patch reviewer know that a sentinel is required per declared
> ctl array.

Ack

> You can also mention here briefly that this helps ensure that future moves of
> sysctl arrays out from kernel/sysctl.c to their own subsystem won't
> penalize in enlarging the kernel build size or run time memory consumption.

I worked it in. Thx for the review.

> Thanks for spinning this up again!
>
> Luis
On Mon, Jul 31, 2023 at 02:36:50PM -0700, Luis Chamberlain wrote:
> > Joel Granados (14):
> >   sysctl: Prefer ctl_table_header in proc_sysctl
> >   sysctl: Use ctl_table_header in list_for_each_table_entry
> >   sysctl: Add ctl_table_size to ctl_table_header
> >   sysctl: Add size argument to init_header
> >   sysctl: Add a size arg to __register_sysctl_table
> >   sysctl: Add size to register_sysctl
> >   sysctl: Add size arg to __register_sysctl_init
>
> This is looking great thanks, I've taken the first 7 patches above
> to sysctl-next to get more exposure / testing and since we're already
> on rc4.

Ok, I haven't heard much more feedback from networking folks, and since
this is mostly sysctl love I've taken in the rest of these patches.
Thanks to Jani Nikula for the reviews and to Greg KH for the suggestion
on simplifying things. Let's see what busts in linux-next, and if
anything does I can reset my tree back to only the first 7 patches.

Luis
On Mon, Aug 07, 2023 at 04:00:49PM -0700, Chris Maness wrote:
> When are these likely to hit the mainline release code?
linux-next tomorrow. The first 7 patches are scheduled for mainline
as they were merged + tested without any hiccups. These last few patches
I'll wait and see. If nothing blows up on linux-next perhaps I'll
send them to Linus for mainline during the next merge window.
Luis
On Mon, 7 Aug 2023 14:44:11 -0700 Luis Chamberlain wrote:
> > This is looking great thanks, I've taken the first 7 patches above
> > to sysctl-next to get more exposure / testing and since we're already
> > on rc4.
>
> Ok I haven't heard much more feedback from networking folks

What did you expect to hear from us? My only comment is to merge the
internal prep changes in and then send us the networking changes in the
next release cycle.
On Mon, Aug 07, 2023 at 07:09:14PM -0700, Jakub Kicinski wrote:
> On Mon, 7 Aug 2023 14:44:11 -0700 Luis Chamberlain wrote:
> > > This is looking great thanks, I've taken the first 7 patches above
> > > to sysctl-next to get more exposure / testing and since we're already
> > > on rc4.
> >
> > Ok I haven't heard much more feedback from networking folks
>
> What did you expect to hear from us?

Just a sanity review would be nice.

> My only comment is to merge the internal prep changes in and then send
> us the networking changes in the next release cycle.

OK, we can do that. I can soak them on linux-next from now on to be sure
they get wider testing. But review on those parts would be nice.

Luis
I tried running the current mainline kernel (current Arch Linux) with a
simple single MUX socket (ax0) using LinFBB. I was a happy camper as it
seemed to work fine at first, then the system just slowed to a crawl. I
am wondering if any of these patches are addressing this behavior. No
kernel panic like before, but not what I was hoping for. I have also
tried sixpack, and that explodes instantly the last time I checked. That
goes all the way back to the v4 kernels. v2 is fine there.

73 de Chris KQ6UP

On Mon, Aug 7, 2023 at 4:43 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
>
> On Mon, Aug 07, 2023 at 04:00:49PM -0700, Chris Maness wrote:
> > When are these likely to hit the mainline release code?
>
> linux-next tomorrow. The first 7 patches are scheduled for mainline
> as they were merged + tested without any hiccups. These last few patches
> I'll wait and see. If nothing blows up on linux-next perhaps I'll
> include them to Linux for mainline during the next merge window.
>
> Luis
On Mon, Aug 07, 2023 at 07:50:44PM -0700, Chris Maness wrote:
> I tried running the current mainline kernel (current Arch Linux) with
> simple single MUX socket (ax0) using LinFBB. I was a happy camper as
> it seemed to work fine at first, then the system just slowed to a
> crawl. I am wondering if any of these patches are addressing this
> behavior.

If it's a regression, no.

> No kernel panic like before, but not what I was hoping for.
> I have also tried sixpack, and that explodes instantly the last time I
> have checked. That goes all the way back to the v4 kernels.

Are you reporting a separate regression that goes all the way back to
v4 kernels?

> v2 is fine there.

What does this mean?

Luis
> > Are you reporting a separate regression that goes all the way back
> > to v4 kernels?

I am not certain what you mean by regression.

> > v2 is fine there.
>
> What does this mean?

I have to go all the way back to kernel version 2 for the serial 6PACK
driver to work. If I try to use it in kernel version 4, 5, or 6 the
kernel panics as soon as I attempt to connect to another station.

> Luis

Chris KQ6UP
Hey Chris,

My questions inline.

On Mon, Aug 07, 2023 at 07:50:44PM -0700, Chris Maness wrote:
> I tried running the current mainline kernel (current Arch Linux) with

What kernel version are we talking about exactly here? Please notice the
base that I used in the cover letter (rc2 or rc3). They should also
rebase cleanly on top of v6.5-rc5. I'm not sure what the patches will do
if you apply them to an older kernel.

> simple single MUX socket (ax0) using LinFBB. I was a happy camper as
> it seemed to work fine at first, then the system just slowed to a
> crawl. I am wondering if any of these patches are addressing this
> behavior. No kernel panic like before, but not what I was hoping for.

So you have been experiencing kernel panics from before? What does
before mean? And can you bisect to pinpoint what commit fixed it?

> I have also tried sixpack, and that explodes instantly the last time I
> have checked. That goes all the way back to the v4 kernels. v2 is
> fine there.

You mean that 6pack is broken in all the major versions up until v2? If
this is the case, then it might be an issue that is not related to this
patchset. I see that Andreas Koensgen <ajk@comnets.uni-bremen.de> is the
maintainer of 6pack; maybe he can chime in on the issue?

> 73 de Chris KQ6UP
>
> On Mon, Aug 7, 2023 at 4:43 PM Luis Chamberlain <mcgrof@kernel.org> wrote:
> >
> > On Mon, Aug 07, 2023 at 04:00:49PM -0700, Chris Maness wrote:
> > > When are these likely to hit the mainline release code?
> >
> > linux-next tomorrow. The first 7 patches are scheduled for mainline
> > as they were merged + tested without any hiccups. These last few patches
> > I'll wait and see. If nothing blows up on linux-next perhaps I'll
> > include them to Linux for mainline during the next merge window.
> >
> > Luis
>
> --
> Thanks,
> Chris Maness
On Mon, Aug 07, 2023 at 08:07:24PM -0700, Chris Maness wrote:
> > > Are you reporting a separate regression that goes all the way back
> > > to v4 kernels?
>
> I am not certain what you mean by regression.
>
> > > v2 is fine there.
> >
> > What does this mean?
>
> I have to go all the way back to kernel version 2 for the serial 6PACK
> driver to work. If I try to use it in kernel version 4, 5, or 6 the
> kernel panics as soon as I attempt to connect to another station.

Again, have you reached out to Andreas Koensgen <ajk@comnets.uni-bremen.de>?
Maybe he has more info on this.

Best

> > Luis
>
> Chris KQ6UP
>
> --
> Thanks,
> Chris Maness
diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
index 5f413bfd6271..9aa8374c0ef1 100644
--- a/fs/proc/proc_sysctl.c
+++ b/fs/proc/proc_sysctl.c
@@ -975,6 +975,7 @@ static struct ctl_dir *new_dir(struct ctl_table_set *set,
 	table[0].procname = new_name;
 	table[0].mode = S_IFDIR|S_IRUGO|S_IXUGO;
 	init_header(&new->header, set->dir.header.root, set, node, table, 1);
+	printk("%ld sysctl saved mem kzalloc \n", sizeof(struct ctl_table));
 	return new;
 }

@@ -1202,6 +1203,7 @@ static struct ctl_table_header *new_links(struct ctl_dir *dir, struct ctl_table_
 		head->ctl_table_size);
 	links->nreg = head->ctl_table_size;
+	printk("%ld sysctl saved mem kzalloc \n", sizeof(struct ctl_table));
 	return links;
 }