Message ID | 20210501151538.145449-1-masahiroy@kernel.org (mailing list archive)
---|---
State | New, archived
Series | Raise the minimum GCC version to 5.2
On Sat, May 1, 2021 at 5:17 PM Masahiro Yamada <masahiroy@kernel.org> wrote:
>
> More cleanups will be possible as follow-up patches, but this one must
> be agreed and applied to the mainline first.

+1 This will allow me to remove the __has_attribute hack in
include/linux/compiler_attributes.h.

Reviewed-by: Miguel Ojeda <ojeda@kernel.org>

Cheers,
Miguel
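For context on the hack Miguel mentions: GCC releases before 5 do not implement the __has_attribute() operator, so include/linux/compiler_attributes.h has to emulate it with a per-attribute lookup table. A rough sketch of that pattern (simplified, not the verbatim mainline header):

#ifndef __has_attribute				/* GCC < 5 */
# define __has_attribute(x) __GCC4_has_attribute_##x
# define __GCC4_has_attribute___assume_aligned__	1
# define __GCC4_has_attribute___copy__			0
# define __GCC4_has_attribute___designated_init__	0
# define __GCC4_has_attribute___no_sanitize_address__	1
#endif

/* Users of the header then test attributes uniformly: */
#if __has_attribute(__copy__)
# define __copy(symbol)		__attribute__((__copy__(symbol)))
#else
# define __copy(symbol)
#endif

Once GCC 5 is the minimum, __has_attribute() is always available and the whole per-attribute table can be dropped.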
On 01/05/2021 at 17:52, Miguel Ojeda wrote:
> On Sat, May 1, 2021 at 5:17 PM Masahiro Yamada <masahiroy@kernel.org> wrote:
>>
>> More cleanups will be possible as follow-up patches, but this one must
>> be agreed and applied to the mainline first.
>
> +1 This will allow me to remove the __has_attribute hack in
> include/linux/compiler_attributes.h.
>
> Reviewed-by: Miguel Ojeda <ojeda@kernel.org>
>

On powerpc this will allow us to remove commit
https://github.com/linuxppc/linux/commit/592bbe9c505d

Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>

Christophe
On Sat, 2021-05-01 at 17:52 +0200, Miguel Ojeda wrote:
> On Sat, May 1, 2021 at 5:17 PM Masahiro Yamada <masahiroy@kernel.org> wrote:
> >
> > More cleanups will be possible as follow-up patches, but this one must
> > be agreed and applied to the mainline first.
>
> +1 This will allow me to remove the __has_attribute hack in
> include/linux/compiler_attributes.h.

Why not raise the minimum gcc compiler version even higher?

https://gcc.gnu.org/releases.html
On Sat, May 01, 2021 at 07:41:53PM -0700, Joe Perches wrote:
> Why not raise the minimum gcc compiler version even higher?
The latest GCC 5 release is only three and a half years old. Do you
really want to require bleeding edge tools?
Segher
On Sun, May 02, 2021 at 12:15:38AM +0900, Masahiro Yamada wrote:
> The current minimum GCC version is 4.9 except ARCH=arm64 requiring
> GCC 5.1.
>
> When we discussed last time, we agreed to raise the minimum GCC version
> to 5.1 globally. [1]
>
> I'd like to propose GCC 5.2 to clean up arch/powerpc/Kconfig as well.

Both of these are GCC version 5. GCC 5.1 is the first release of that,
GCC 5.2 the second, etc. Everyone should always use an as new release
as practical, since many bugs will be fixed, and nothing else changed.
See <https://gcc.gnu.org/develop.html#num_scheme>.

So, this means everyone using GCC 5 should be using the GCC 5.5 release!

If there is something about 5.1 that makes it produce bad kernels on
some arch, make that arch's Makefile complain? Same with binutils etc.

Segher
On Sun, 2021-05-02 at 13:30 -0500, Segher Boessenkool wrote:
> On Sat, May 01, 2021 at 07:41:53PM -0700, Joe Perches wrote:
> > Why not raise the minimum gcc compiler version even higher?

On Sun, 2021-05-02 at 13:37 -0500, Segher Boessenkool wrote:
> Everyone should always use an as new release as practical
[]
> The latest GCC 5 release is only three and a half years old.

You argue slightly against yourself here.

Yes, it's mostly a question of practicality vs latest.

clang requires a _very_ recent version.
gcc _could_ require a later version.
Perhaps 8 might be best as that has a __diag warning control mechanism.

gcc 8.1 is now 3 years old today.
On Sun, May 02, 2021 at 01:00:28PM -0700, Joe Perches wrote:
> On Sun, 2021-05-02 at 13:30 -0500, Segher Boessenkool wrote:
> > On Sat, May 01, 2021 at 07:41:53PM -0700, Joe Perches wrote:
> > > Why not raise the minimum gcc compiler version even higher?
>
> On Sun, 2021-05-02 at 13:37 -0500, Segher Boessenkool wrote:
> > Everyone should always use an as new release as practical
>
> []
>
> > The latest GCC 5 release is only three and a half years old.
>
> You argue slightly against yourself here.

I don't?

> Yes, it's mostly a question of practicality vs latest.
>
> clang requires a _very_ recent version.
> gcc _could_ require a later version.
> Perhaps 8 might be best as that has a __diag warning control mechanism.

I have no idea what you mean?

> gcc 8.1 is now 3 years old today.

And there will be a new GCC 8 release very soon now!

The point is, you inconvenience users if you require a compiler version
they do not already have. Five years might be fine, but three years is
not.

Segher
On 02/05/2021 23.32, Segher Boessenkool wrote:
> On Sun, May 02, 2021 at 01:00:28PM -0700, Joe Perches wrote:
>> On Sun, 2021-05-02 at 13:30 -0500, Segher Boessenkool wrote:
>>> On Sat, May 01, 2021 at 07:41:53PM -0700, Joe Perches wrote:
>>>> Why not raise the minimum gcc compiler version even higher?
>> On Sun, 2021-05-02 at 13:37 -0500, Segher Boessenkool wrote:
>>> Everyone should always use an as new release as practical
>> []
>>
>>> The latest GCC 5 release is only three and a half years old.
>> You argue slightly against yourself here.
> I don't?
>
>> Yes, it's mostly a question of practicality vs latest.
>>
>> clang requires a _very_ recent version.
>> gcc _could_ require a later version.
>> Perhaps 8 might be best as that has a __diag warning control mechanism.
> I have no idea what you mean?
>
>> gcc 8.1 is now 3 years old today.
> And there will be a new GCC 8 release very soon now!
>
> The point is, you inconvenience users if you require a compiler version
> they do not already have. Five years might be fine, but three years is
> not.
>
> Segher

Users & especially devs should upgrade then. 3 years of not updating
your compiler - if you regularly build the kernel - seems nonsensical.

Ali
On Sun, May 2, 2021 at 1:38 PM Segher Boessenkool
<segher@kernel.crashing.org> wrote:
>
> The point is, you inconvenience users if you require a compiler version
> they do not already have. Five years might be fine, but three years is
> not.

So this should be our main issue - not how old a compiler is, but how
our compiler version limitations end up possibly making it harder for
users to upgrade.

Of course, one issue there is whether said users would have upgraded
regardless - if you have a very old distribution, how likely are you to
upgrade the kernel at all?

But we do very much want to encourage people to upgrade their kernels,
even if they might be running otherwise fairly old user space. If for
no other reason than that it's good for our kernel coverage testing -
the more different distributions people test a new kernel with, the
better.

And some of the less common architectures have their own issues, with
distros possibly not even supporting them any more (if they ever did -
considering all the odd ad-hoc cross-compiler builds people have had..)

This is why "our clang support requires a very recent version of clang"
is not relevant - distributions won't have old versions of clang anyway,
and even if they do, such distributions will be gcc-based, so "build the
kernel with clang" for that situation is perhaps an exercise for some
intrepid person who is willing to do odd and unusual things, and might
as well build their own clang version too.

So I really wish people didn't get hung about some "three years ago" or
similar. It's not relevant. What is relevant is what version of gcc
various distributions actually have reasonably easily available, and
how old and relevant the distributions are. We did decide that (just as
an example) RHEL 7 was too old to worry about when we updated the gcc
version requirement last time.

Last year, Arnd and Kirill (maybe others were involved too) made a list
of distros and older gcc versions. But I don't think anybody actually
_maintains_ such a list. It would be perhaps interesting to have some
way to check what compiler versions are being offered by different
distros.

Linus
On Sun, 2021-05-02 at 15:32 -0500, Segher Boessenkool wrote:
> On Sun, May 02, 2021 at 01:00:28PM -0700, Joe Perches wrote:
[]
> > Perhaps 8 might be best as that has a __diag warning control mechanism.
>
> I have no idea what you mean?

? read the last bit of compiler-gcc.h
On Sun, May 02, 2021 at 02:08:31PM -0700, Linus Torvalds wrote:
> What is relevant is what version of gcc various distributions actually
> have reasonably easily available, and how old and relevant the
> distributions are. We did decide that (just as an example) RHEL 7 was
> too old to worry about when we updated the gcc version requirement
> last time.
>
> Last year, Arnd and Kirill (maybe others were involved too) made a
> list of distros and older gcc versions. But I don't think anybody
> actually _maintains_ such a list. It would be perhaps interesting to
> have some way to check what compiler versions are being offered by
> different distros.

fwiw, Debian 9 aka Stretch released June 2017 had gcc 6.3
Debian 10 aka Buster released June 2019 had gcc 7.4 *and* 8.3.
Debian 8 aka Jessie had gcc-4.8.4 and gcc-4.9.2.

So do we care about people who haven't bothered to upgrade userspace
since 2017? If so, we can't go past 4.9.
On Sun, May 02, 2021 at 02:23:01PM -0700, Joe Perches wrote:
> On Sun, 2021-05-02 at 15:32 -0500, Segher Boessenkool wrote:
> > On Sun, May 02, 2021 at 01:00:28PM -0700, Joe Perches wrote:
> []
> > > Perhaps 8 might be best as that has a __diag warning control mechanism.
> >
> > I have no idea what you mean?
>
> ? read the last bit of compiler-gcc.h

Ah, you mean
  #pragma GCC diagnostic
(which has existed since GCC 4.2). Does anything in this __diag stuff
require GCC 8? Other than that this is hardcoded here :-)

Segher
On 01/05/2021 at 17:15, Masahiro Yamada wrote:
> The current minimum GCC version is 4.9 except ARCH=arm64 requiring
> GCC 5.1.
>
> When we discussed last time, we agreed to raise the minimum GCC version
> to 5.1 globally. [1]
>
> I'd like to propose GCC 5.2 to clean up arch/powerpc/Kconfig as well.

One point I missed when I saw your patch first time, but I realised
during the discussion:

Up to 4.9, GCC was numbered with 3 digits: we had 4.8.0, 4.8.1, ...
4.8.5, 4.9.0, 4.9.1, ... 4.9.4

Then starting at 5, GCC switched to a 2-digit scheme, with 5.0, 5.1,
5.2, ... 5.5

So it is not GCC 5.1 or 5.2 that you should target, but only GCC 5.
Then it is up to the user to use the latest available version of GCC 5,
which is 5.5 at the time being, just like the user would have selected
4.9.4 when 4.9 was the minimum GCC version.

Christophe
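The Kconfig and header checks touched by this patch compare against GCC_VERSION, which the kernel computes from the compiler's own version macros. A simplified sketch of that definition (it lives in include/linux/compiler-gcc.h or compiler-version.h depending on the tree):

/*
 * GCC 4.9.4 -> 40904, 5.1.0 -> 50100, 5.2.0 -> 50200, which is why the
 * powerpc workaround being removed reads "GCC_VERSION >= 50200".
 */
#define GCC_VERSION (__GNUC__ * 10000		\
		     + __GNUC_MINOR__ * 100	\
		     + __GNUC_PATCHLEVEL__)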
Hei hei,

On Sun, May 02, 2021 at 11:30:07PM +0100, Matthew Wilcox wrote:
> On Sun, May 02, 2021 at 02:08:31PM -0700, Linus Torvalds wrote:
> > What is relevant is what version of gcc various distributions actually
> > have reasonably easily available, and how old and relevant the
> > distributions are. We did decide that (just as an example) RHEL 7 was
> > too old to worry about when we updated the gcc version requirement
> > last time.
> >
> > Last year, Arnd and Kirill (maybe others were involved too) made a
> > list of distros and older gcc versions. But I don't think anybody
> > actually _maintains_ such a list. It would be perhaps interesting to
> > have some way to check what compiler versions are being offered by
> > different distros.
>
> fwiw, Debian 9 aka Stretch released June 2017 had gcc 6.3
> Debian 10 aka Buster released June 2019 had gcc 7.4 *and* 8.3.
> Debian 8 aka Jessie had gcc-4.8.4 and gcc-4.9.2.
>
> So do we care about people who haven't bothered to upgrade userspace
> since 2017? If so, we can't go past 4.9.

Desktops and servers are all nice, however I just want to make you
aware, there are embedded users forced to stick to older cross
toolchains for different reasons as well, e.g. in industrial
environment. :-)

This is no show stopper for us, I just wanted to let you be aware.

Greets
Alex
On Mon, 2021-05-03 at 09:34 +0200, Alexander Dahl wrote:
> Desktops and servers are all nice, however I just want to make you
> aware, there are embedded users forced to stick to older cross
> toolchains for different reasons as well, e.g. in industrial
> environment. :-)

In your embedded case, what kernel version do you use?

For older toolchains, unless it's kernel version 5.13+, it wouldn't matter.

And all the supported architectures have gcc 10.3 available at
http://cdn.kernel.org/pub/tools/crosstool/
On Mon, May 3, 2021 at 9:35 AM Alexander Dahl <ada@thorsis.com> wrote:
>
> Desktops and servers are all nice, however I just want to make you
> aware, there are embedded users forced to stick to older cross
> toolchains for different reasons as well, e.g. in industrial
> environment. :-)
>
> This is no show stopper for us, I just wanted to let you be aware.

Can you be more specific about what scenarios you are thinking of,
what the motivations are for using an old compiler with a new kernel
on embedded systems, and what you think a realistic maximum time would
be between compiler updates?

One scenario that I've seen previously is where user space and kernel
are built together as a source based distribution (OE, buildroot,
openwrt, ...), and the compiler is picked to match the original sources
of the user space because that is best tested, but the same compiler
then gets used to build the kernel as well because that is the default
in the build environment.

There are two problems I see with this logic:

- Running the latest kernel to avoid security problems is of course a
  good idea, but if one runs that with ten year old user space that is
  never updated, the system is likely to end up just as insecure. Not
  all bugs are in the kernel.

- The same logic that applies to ancient user space staying with an
  ancient compiler (it's better tested in this combination) also
  applies to the kernel: running the latest kernel on an old compiler
  is something that few people test, and tends to run into more bugs
  than using the compiler that other developers used to test that
  kernel.

Arnd
On Sun, May 02, 2021 at 02:08:31PM -0700, Linus Torvalds wrote:
> Last year, Arnd and Kirill (maybe others were involved too) made a
> list of distros and older gcc versions. But I don't think anybody
> actually _maintains_ such a list.

Distrowatch does. I used it for checking. But you need to check it per
distro. For Debian it would be here:

https://distrowatch.com/table.php?distribution=debian
On Mon, May 3, 2021 at 2:44 AM Segher Boessenkool
<segher@kernel.crashing.org> wrote:
>
> On Sun, May 02, 2021 at 02:23:01PM -0700, Joe Perches wrote:
> > On Sun, 2021-05-02 at 15:32 -0500, Segher Boessenkool wrote:
> > > On Sun, May 02, 2021 at 01:00:28PM -0700, Joe Perches wrote:
> > []
> > > > Perhaps 8 might be best as that has a __diag warning control mechanism.
> > >
> > > I have no idea what you mean?
> >
> > ? read the last bit of compiler-gcc.h
>
> Ah, you mean
>   #pragma GCC diagnostic
> (which has existed since GCC 4.2). Does anything in this __diag stuff
> require GCC 8? Other than that this is hardcoded here :-)

The '8' was just a kernel thing, we made it configurable to have
version specific warnings, and I have a header file that adds these
macros for all supported compilers, but the version that is in mainline
only does it for gcc-8 or later.

Early compilers only supported "#pragma GCC diagnostic", but I think
even gcc-4.6 supported the _Pragma() syntax that lets you do it inside
of a macro.

It's something we should improve with plumbing on top, e.g. I want a
macro that lets you locally turn off both -Woverride-init on gcc and
-Winitializer-overrides on clang. It's not a reason to mandate a newer
compiler though.

Arnd
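For reference, the __diag machinery Joe and Arnd are talking about sits at the end of include/linux/compiler-gcc.h. A trimmed sketch (not the complete header) of how it combines _Pragma() with a version-gated wrapper:

#define __diag_str1(s)		#s
#define __diag_str(s)		__diag_str1(s)
#define __diag(s)		_Pragma(__diag_str(GCC diagnostic s))

/* Only GCC 8+ gets warnings routed through this wrapper; older
 * compilers expand it to nothing. */
#if GCC_VERSION >= 80000
#define __diag_GCC_8(s)		__diag(s)
#else
#define __diag_GCC_8(s)
#endif

/* Typical use, as in include/linux/syscalls.h: */
__diag_push();
__diag_ignore(GCC, 8, "-Wattribute-alias",
	      "Type aliasing is done to sandbox syscall arguments");
/* ... declarations that would otherwise warn ... */
__diag_pop();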
On Mon, May 3, 2021 at 12:32 AM Matthew Wilcox <willy@infradead.org> wrote:
> On Sun, May 02, 2021 at 02:08:31PM -0700, Linus Torvalds wrote:
> > What is relevant is what version of gcc various distributions actually
> > have reasonably easily available, and how old and relevant the
> > distributions are. We did decide that (just as an example) RHEL 7 was
> > too old to worry about when we updated the gcc version requirement
> > last time.
> >
> > Last year, Arnd and Kirill (maybe others were involved too) made a
> > list of distros and older gcc versions. But I don't think anybody
> > actually _maintains_ such a list. It would be perhaps interesting to
> > have some way to check what compiler versions are being offered by
> > different distros.
>
> fwiw, Debian 9 aka Stretch released June 2017 had gcc 6.3
> Debian 10 aka Buster released June 2019 had gcc 7.4 *and* 8.3.
> Debian 8 aka Jessie had gcc-4.8.4 and gcc-4.9.2.
>
> So do we care about people who haven't bothered to upgrade userspace
> since 2017? If so, we can't go past 4.9.

I would argue that we shouldn't care about distros that are officially
end-of-life. Jessie support ended last July according to the official
Debian pages at https://wiki.debian.org/LTS.

It's a little harder for distros that are still officially supported,
like the RHEL7 case that Linus mentioned, Debian Stretch (gcc-6.3),
Slackware 14.2 (gcc-5.3), or Ubuntu 18.04 (gcc-7.3). For any of these
you could make the argument one way or the other: either say we care as
long as the distro cares, or the users that want to build their own
kernels can be reasonably expected to either upgrade their distro or
install a newer compiler manually.

Looking at the Debian case specifically, I see these numbers from
https://popcon.debian.org/:

 testing/unstable:     16730
 buster/stable:       113881
 stretch/oldstable:    39147
 jessie/oldoldstable:  19286

Assuming the numbers of users that installed popcon are proportional to
the actual number of users, that's still a large chunk of people running
stretch or older. Presumably, these users are actually less likely to
build their own kernels.

Arnd
From: Arnd Bergmann
> Sent: 03 May 2021 10:25
...
> One scenario that I've seen previously is where user space and
> kernel are built together as a source based distribution (OE, buildroot,
> openwrt, ...), and the compiler is picked to match the original sources
> of the user space because that is best tested, but the same compiler
> then gets used to build the kernel as well because that is the default
> in the build environment.

If you are building programs for release to customers who might be
running them on old distributions then you need a system with the
original userspace headers and almost certainly a similar vintage
compiler.

Never mind RHEL7, we have customers running RHEL6. (We've managed to get
everyone off RHEL5.) So the build machine is running a 10+ year old
distro.

I did try to build on a newer system (only 5 years old) but the complete
fubar of memcpy() makes it impossible to compile C programs that will
run on an older libc.

And don't even mention C++, the 'character traits' is just plain horrid
- enough to make me want to remove every reference to CString from the
small amount of C++ we have.

To quote our makefile:

# C++ is fighting back.
# I'd like to be able to compile on a 'new' system and still be able to run
# the binaries on RHEL 6 (2.6.32 kernel 2011 era libraries).
# But even linking libstdc++ static still leaves
# an undefined C++ symbol that the dynamic loader barfs on.
# The static libstdc++ also references memcpy@GLIBC_2.14 - but that can be
# 'solved' by adding an extra .so that defines the symbol (and calls memmove()).
# I've also tried pulling a single .o out of libstc++.a. This might work if
# the .o is small and self contained.
#
# For now we statically link libstc++ and continue to build on an old system.
C++LDLIBS := -Wl,-Bstatic -lstdc++ -Wl,-Bdynamic

It would be nice to be able to build current kernels (for local use) on
the 'new' system - but gcc is already too old.

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
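A side note on the memcpy@GLIBC_2.14 problem David quotes: it is a userspace linking issue rather than a kernel one, and the workaround commonly used for it (not something proposed in this thread) is to pin references to the older symbol version explicitly. A hypothetical sketch, assuming an x86-64 glibc where the pre-2.14 memcpy is versioned GLIBC_2.2.5:

/* Hypothetical userspace workaround, not kernel code: bind memcpy to the
 * old GLIBC_2.2.5 version so the binary still loads on pre-2.14 glibc.
 * Assumes GNU as/ld and x86-64 glibc symbol versioning. */
__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

#include <string.h>

void copy_header(char *dst, const char *src, size_t len)
{
	memcpy(dst, src, len);	/* resolves against the old symbol version */
}

That avoids carrying the extra interposer .so the makefile comment mentions, but it does nothing for the kernel-build side of the problem.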
On Sun, May 02, 2021 at 12:15:38AM +0900, Masahiro Yamada wrote:
> The current minimum GCC version is 4.9 except ARCH=arm64 requiring
> GCC 5.1.
>
> When we discussed last time, we agreed to raise the minimum GCC version
> to 5.1 globally. [1]

There are still a lot of comment references to old gcc releases with
workarounds or bugfixes, a quick search:

$ git grep -in 'gcc.*[234]\.x'
arch/alpha/include/asm/string.h:30:/* For gcc 3.x, we cannot have the inline function named "memset" because
arch/arc/include/asm/checksum.h:9: *  -gcc 4.4.x broke networking. Alias analysis needed to be primed.
arch/arm/Makefile:127:# Need -Uarm for gcc < 3.x
arch/ia64/lib/memcpy_mck.S:535: * Due to lack of local tag support in gcc 2.x assembler, it is not clear which
arch/mips/include/asm/page.h:210: * also affect MIPS so we keep this one until GCC 3.x has been retired
arch/x86/include/asm/page.h:53: * remove this Voodoo magic stuff. (i.e. once gcc3.x is deprecated)
arch/x86/kvm/x86.c:5569: * This union makes it completely explicit to gcc-3.x
arch/x86/mm/pgtable.c:302: if (PREALLOCATED_PMDS == 0) /* Work around gcc-3.4.x bug */
drivers/net/ethernet/renesas/sh_eth.c:51: * that warning from W=1 builds. GCC has supported this option since 4.2.X, but
lib/xz/xz_dec_lzma2.c:494: * of the code generated by GCC 3.x decreases 10-15 %. (GCC 4.3 doesn't care,
lib/xz/xz_dec_lzma2.c:495: * and it generates 10-20 % faster code than GCC 3.x from this file anyway.)
net/core/skbuff.c:32: * The functions in this file will not compile correctly with gcc 2.4.x

This misses version-specific quirks, but the following returns 216
results and not all are problematic (eg. just referring to gcc for some
historical reason) so I'm not pasting it here.

$ git grep -in 'gcc.*[234]\.[0-9]'
...
On Mon, May 3, 2021 at 2:20 PM David Laight <David.Laight@aculab.com> wrote:
>
> It would be nice to be able to build current kernels (for local
> use) on the 'new' system - but gcc is already too old.

I have seen such environments too... However, for the kernel in
particular, you could install a newer GCC in the 'new' machine (just
for the kernel builds) or do your kernel builds in a different machine
-- a 'new' 'new' one :)

Cheers,
Miguel
On Mon, May 3, 2021 at 3:17 PM Christophe Leroy
<christophe.leroy@csgroup.eu> wrote:
>
>
>
> On 01/05/2021 at 17:15, Masahiro Yamada wrote:
> > The current minimum GCC version is 4.9 except ARCH=arm64 requiring
> > GCC 5.1.
> >
> > When we discussed last time, we agreed to raise the minimum GCC version
> > to 5.1 globally. [1]
> >
> > I'd like to propose GCC 5.2 to clean up arch/powerpc/Kconfig as well.
>
> One point I missed when I saw your patch first time, but I realised during the discussion:
>
> Up to 4.9, GCC was numbered with 3 digits, we had 4.8.0, 4.8.1, ... 4.8.5, 4.9.0, 4.9.1, ... 4.9.4
>
> Then starting at 5, GCC switched to a 2 digits scheme, with 5.0, 5.1, 5.2, ... 5.5
>
> So, that is not GCC 5.1 or 5.2 that you should target, but only GCC 5.
> Then it is up to the user to use the latest available version of GCC 5, which is 5.5 at the time
> being, just like the user would have selected 4.9.4 when 4.9 was the minimum GCC version.
>
> Christophe

One line below in Documentation/process/changes.rst, I see

  Clang/LLVM (optional)  10.0.1  clang --version

Clang 10.0.1 is a bug fix release of Clang 10.

I do not think GCC 5.2 is strange when we want to exclude the initial
release of GCC 5.
Hello Arnd,

On Mon, May 03, 2021 at 11:25:21AM +0200, Arnd Bergmann wrote:
> On Mon, May 3, 2021 at 9:35 AM Alexander Dahl <ada@thorsis.com> wrote:
> >
> > Desktops and servers are all nice, however I just want to make you
> > aware, there are embedded users forced to stick to older cross
> > toolchains for different reasons as well, e.g. in industrial
> > environment. :-)
> >
> > This is no show stopper for us, I just wanted to let you be aware.
>
> Can you be more specific about what scenarios you are thinking of,
> what the motivations are for using an old compiler with a new kernel
> on embedded systems, and what you think a realistic maximum
> time would be between compiler updates?

One reason might be certification. For certain industrial applications
like support for complex field bus protocols, you need to get your
devices tested by an external partner running extensive test suites.
This is time consuming and expensive.

Changing the toolchain of your system then, would be a massive change
which would require recertification, while you could argue just
updating a single component like the kernel and building everything
again, does not require the whole testing process again.

Thin ice, I know.

> One scenario that I've seen previously is where user space and
> kernel are built together as a source based distribution (OE, buildroot,
> openwrt, ...), and the compiler is picked to match the original sources
> of the user space because that is best tested, but the same compiler
> then gets used to build the kernel as well because that is the default
> in the build environment.

One problem we actually ran into in BSPs like that (we build with
ptxdist, however build system doesn't matter here, it could as well
have been buildroot etc.) was things* failing to build with newer
compilers, things we could not or did not want to fix, so staying with
an older toolchain was the obvious choice.

*Things as in bootloaders for an armv5 platform.

> There are two problems I see with this logic:
>
> - Running the latest kernel to avoid security problems is of course
>   a good idea, but if one runs that with ten year old user space that
>   is never updated, the system is likely to end up just as insecure.
>   Not all bugs are in the kernel.

Agreed.

> - The same logic that applies to ancient user space staying with
>   an ancient compiler (it's better tested in this combination) also
>   applies to the kernel: running the latest kernel on an old compiler
>   is something that few people test, and tends to run into more bugs
>   than using the compiler that other developers used to test that
>   kernel.

What we actually did: building recent userspace and kernel with older
toolchains, because bootloader. I know, there are several possibilities
to solve this kind of lock:

- built bootloader with different compiler
- update bootloader
- …

As said before, this is no problem for me now, I can work around it,
but to give an idea what could keep people on older toolchains.

Greets
Alex
On 04/05/2021 at 07:30, Alexander Dahl wrote:
> Hello Arnd,
>
> On Mon, May 03, 2021 at 11:25:21AM +0200, Arnd Bergmann wrote:
>> On Mon, May 3, 2021 at 9:35 AM Alexander Dahl <ada@thorsis.com> wrote:
>>>
>>> Desktops and servers are all nice, however I just want to make you
>>> aware, there are embedded users forced to stick to older cross
>>> toolchains for different reasons as well, e.g. in industrial
>>> environment. :-)
>>>
>>> This is no show stopper for us, I just wanted to let you be aware.
>>
>> Can you be more specific about what scenarios you are thinking of,
>> what the motivations are for using an old compiler with a new kernel
>> on embedded systems, and what you think a realistic maximum
>> time would be between compiler updates?
>
> One reason might be certification. For certain industrial applications
> like support for complex field bus protocols, you need to get your
> devices tested by an external partner running extensive test suites.
> This is time consuming and expensive.
>
> Changing the toolchain of your system then, would be a massive change
> which would require recertification, while you could argue just
> updating a single component like the kernel and building everything
> again, does not require the whole testing process again.

Not sure I follow you. Our company provides systems for Air Traffic
Control, so we have the same kind of assurance quality process, but
then I can't understand why you would need to upgrade your kernel at
all.

Today our system is based on GCC 5 and Kernel 4.14. At the time being
we are using GCC 5.5 (latest GCC 5) and kernel 4.14.232 (latest
4.14.y). Kernel 4.14 is maintained until 2024.

The day we do an upgrade, we upgrade everything including the tool
chain, then we go for another 6 years without major changes and
re-qualification, because we can't afford a new qualification every
now and then.

So really, I can't see your approach.

Christophe
On 02/05/2021 03:41, Joe Perches wrote:
> On Sat, 2021-05-01 at 17:52 +0200, Miguel Ojeda wrote:
>> On Sat, May 1, 2021 at 5:17 PM Masahiro Yamada <masahiroy@kernel.org> wrote:
>>>
>>> More cleanups will be possible as follow-up patches, but this one must
>>> be agreed and applied to the mainline first.
>>
>> +1 This will allow me to remove the __has_attribute hack in
>> include/linux/compiler_attributes.h.
>
> Why not raise the minimum gcc compiler version even higher?
>
> https://gcc.gnu.org/releases.html

Some of us are a bit stuck as either customer refuses to upgrade their
build infrastructure or has paid for some old but safety blessed
version of gcc. These often lag years behind the recent gcc releases :(
On Tue, May 4, 2021 at 9:57 AM Ben Dooks <ben.dooks@codethink.co.uk> wrote:
>
> Some of us are a bit stuck as either customer refuses to upgrade
> their build infrastructure or has paid for some old but safety
> blessed version of gcc. These often lag years behind the recent
> gcc releases :(

In those scenarios, why do you need to build mainline? Aren't your
customers using longterm or frozen kernels? If they are paying for
certified GCC images, aren't they already paying for supported kernel
images from some vendor too?

I understand where you are coming from -- I have also dealt with
projects/machines running ancient, unsupported software/toolchains for
various reasons; but nobody expected upstream (and in particular the
mainline kernel source) to support them. In the cases I experienced,
those use cases require not touching anything at all, and when the time
came of doing so, everything would be updated at once,
re-certified/validated as needed and frozen again.

Cheers,
Miguel
On Tue, May 04, 2021 at 10:38:32AM +0200, Miguel Ojeda wrote:
> On Tue, May 4, 2021 at 9:57 AM Ben Dooks <ben.dooks@codethink.co.uk> wrote:
> >
> > Some of us are a bit stuck as either customer refuses to upgrade
> > their build infrastructure or has paid for some old but safety
> > blessed version of gcc. These often lag years behind the recent
> > gcc releases :(
>
> In those scenarios, why do you need to build mainline? Aren't your
> customers using longterm or frozen kernels? If they are paying for
> certified GCC images, aren't they already paying for supported kernel
> images from some vendor too?
>
> I understand where you are coming from -- I have also dealt with
> projects/machines running ancient, unsupported software/toolchains for
> various reasons; but nobody expected upstream (and in particular the
> mainline kernel source) to support them. In the cases I experienced,
> those use cases require not touching anything at all, and when the
> time came of doing so, everything would be updated at once,
> re-certified/validated as needed and frozen again.

Except it makes answering the question "Is this bug we see on this
ancient system still present in upstream?" needlessly more difficult to
answer.

Sure, throwing out old compiler versions that are known to cause
problems makes sense. Updating to latest just because much less so.

One of the selling points of C in general and gcc in particular is
stability. If we need the latest compiler we can as well rewrite the
kernel in Rust which has a required update cycle of a few months.

Because some mainline kernel features rely on bleeding edge tools I end
up building mainline with current tools anyway but if you do not need
BTF or whatever other latest gimmick older toolchains should do.

Thanks

Michal
On Tue, May 4, 2021 at 7:31 AM Alexander Dahl <ada@thorsis.com> wrote:
> On Mon, May 03, 2021 at 11:25:21AM +0200, Arnd Bergmann wrote:
> > On Mon, May 3, 2021 at 9:35 AM Alexander Dahl <ada@thorsis.com> wrote:
> > >
> > > Desktops and servers are all nice, however I just want to make you
> > > aware, there are embedded users forced to stick to older cross
> > > toolchains for different reasons as well, e.g. in industrial
> > > environment. :-)
> > >
> > > This is no show stopper for us, I just wanted to let you be aware.
> >
> > Can you be more specific about what scenarios you are thinking of,
> > what the motivations are for using an old compiler with a new kernel
> > on embedded systems, and what you think a realistic maximum
> > time would be between compiler updates?
>
> One reason might be certification. For certain industrial applications
> like support for complex field bus protocols, you need to get your
> devices tested by an external partner running extensive test suites.
> This is time consuming and expensive.
>
> Changing the toolchain of your system then, would be a massive change
> which would require recertification, while you could argue just
> updating a single component like the kernel and building everything
> again, does not require the whole testing process again.
>
> Thin ice, I know.

As Christophe said, I don't think this is a valid example. I agree that
if rebuilding everything with a new toolchain requires certification,
you shouldn't rebuild everything. If replacing the kernel does not
require recertification for your specific system, that is fine, but
that does not mean the new kernel should be built with an outdated
toolchain.

If the certification allows replacing linux-3.18 with linux-5.10 but
doesn't allow building the kernel with a different toolchain compared
to the rest, then the point of the certification is rather
questionable. Do you know specific certifications that would require
you to do this?

> One problem we actually ran into in BSPs like that (we build with
> ptxdist, however build system doesn't matter here, it could as well
> have been buildroot etc.) was things* failing to build with newer
> compilers, things we could not or did not want to fix, so staying with
> an older toolchain was the obvious choice.
>
> *Things as in bootloaders for an armv5 platform.
...
>
> What we actually did: building recent userspace and kernel with older
> toolchains, because bootloader.

It sounds like you are trying to make an argument in favour of
deprecating old toolchains *earlier* in new kernels ;-)

If we simply made it impossible to have users build kernels with the
same old toolchain that is needed for building the old bootloader or
the old user space, it sounds like more people would do the right thing
and build the updated kernels with a better tested toolchain, or update
their bootloader as well. The only downside is that some users would
choose to remain on the older kernels, so it shouldn't be too
aggressive either.

Arnd
On Tue, May 4, 2021 at 11:22 AM Michal Suchánek <msuchanek@suse.de> wrote:
>
> Except it makes answering the question "Is this bug we see on this
> ancient system still present in upstream?" needlessly more difficult to
> answer.

Can you please provide some details? If you are talking about testing a
new kernel image in the ancient system "as-is", why wouldn't you build
it in a newer system? If you are talking about particular problems
about bisecting (kernel, compiler) pairs etc., details would also be
welcome.

> Sure, throwing out old compiler versions that are known to cause
> problems makes sense. Updating to latest just because much less so.

I definitely did not argue for "latest compiler" or "updating just
because".

> One of the selling point of C in general and gcc in particular is
> stability. If we need the latest compiler we can as well rewrite the
> kernel in Rust which has a required update cycle of a few months.

Rust does not have a "required update cycle" and it does not break old
code unless really required, just like C and common compilers.

Concerning GCC, they patch releases for ~2.5 years, sure, but for many
projects that is not nearly enough. So you still need custom support,
which is anyway what most people care about.

> Because some mainline kernel features rely on bleeding edge tools I end
> up building mainline with current tools anyway but if you do not need
> BTF or whatever other latest gimmick older toolchains should do.

It would be better to hear concrete arguments about why "older
toolchains should do", rather than calling things a gimmick.

Cheers,
Miguel
On Tue, May 04, 2021 at 02:09:24PM +0200, Miguel Ojeda wrote:
> On Tue, May 4, 2021 at 11:22 AM Michal Suchánek <msuchanek@suse.de> wrote:
> >
> > Except it makes answering the question "Is this bug we see on this
> > ancient system still present in upstream?" needlessly more difficult to
> > answer.
>
> Can you please provide some details? If you are talking about testing
> a new kernel image in the ancient system "as-is", why wouldn't you
> build it in a newer system? If you are talking about particular
> problems about bisecting (kernel, compiler) pairs etc., details would
> also be welcome.

Yes, bisecting comes to mind. If you need to switch the userspace as
well the bisection results are not that solid. You may not be even able
to bisect because the workload does not exist on a new system at all.
Crafting a minimal test case that can be forward-ported to a new system
is not always trivial - if you understood the problem to that extent you
might not even need to bisect it in the first place.

Thanks

Michal
On 04/05/2021 at 14:17, Michal Suchánek wrote:
> On Tue, May 04, 2021 at 02:09:24PM +0200, Miguel Ojeda wrote:
>> On Tue, May 4, 2021 at 11:22 AM Michal Suchánek <msuchanek@suse.de> wrote:
>>>
>>> Except it makes answering the question "Is this bug we see on this
>>> ancient system still present in upstream?" needlessly more difficult to
>>> answer.
>>
>> Can you please provide some details? If you are talking about testing
>> a new kernel image in the ancient system "as-is", why wouldn't you
>> build it in a newer system? If you are talking about particular
>> problems about bisecting (kernel, compiler) pairs etc., details would
>> also be welcome.
>
> Yes, bisecting comes to mind. If you need to switch the userspace as
> well the bisection results are not that solid. You may not be even able
> to bisect because the workload does not exist on a new system at all.
> Crafting a minimal test case that can be forward-ported to a new system
> is not always trivial - if you understood the problem to that extent you
> might not even need to bisect it in the first place.

But you don't need to switch the userspace or the complete build tools
to build a kernel with a newer toolchain. All you have to do is take
one from https://mirrors.edge.kernel.org/pub/tools/crosstool/

I'm doing everything under CentOS 6, and using one of those tools
allows me to build latest kernel without breaking anything else.
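As an illustration of what Christophe describes, picking up one of the kernel.org crosstool builds only takes a couple of commands. The tarball name and paths below are examples only; check the index for the host/target combination you actually need (Joe's link earlier notes gcc 10.3 builds for all supported architectures):

# Example only -- pick the right tarball from
# https://mirrors.edge.kernel.org/pub/tools/crosstool/
wget https://mirrors.edge.kernel.org/pub/tools/crosstool/files/bin/x86_64/10.3.0/x86_64-gcc-10.3.0-nolibc-powerpc64-linux.tar.xz
sudo tar -xJf x86_64-gcc-10.3.0-nolibc-powerpc64-linux.tar.xz -C /opt

# The toolchain is used for the kernel build only; the host distro stays untouched.
make ARCH=powerpc CROSS_COMPILE=/opt/gcc-10.3.0-nolibc/powerpc64-linux/bin/powerpc64-linux- defconfig
make ARCH=powerpc CROSS_COMPILE=/opt/gcc-10.3.0-nolibc/powerpc64-linux/bin/powerpc64-linux- -j"$(nproc)" vmlinux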
On Mon, May 3, 2021 at 9:17 AM Christophe Leroy
<christophe.leroy@csgroup.eu> wrote:
> On 01/05/2021 at 17:15, Masahiro Yamada wrote:
> > The current minimum GCC version is 4.9 except ARCH=arm64 requiring
> > GCC 5.1.
> >
> > When we discussed last time, we agreed to raise the minimum GCC version
> > to 5.1 globally. [1]
> >
> > I'd like to propose GCC 5.2 to clean up arch/powerpc/Kconfig as well.
>
> One point I missed when I saw your patch first time, but I realised during the discussion:
>
> Up to 4.9, GCC was numbered with 3 digits, we had 4.8.0, 4.8.1, ... 4.8.5, 4.9.0, 4.9.1, ... 4.9.4
>
> Then starting at 5, GCC switched to a 2 digits scheme, with 5.0, 5.1, 5.2, ... 5.5
>
> So, that is not GCC 5.1 or 5.2 that you should target, but only GCC 5.
> Then it is up to the user to use the latest available version of GCC 5, which is 5.5 at the time
> being, just like the user would have selected 4.9.4 when 4.9 was the minimum GCC version.

And we may end up in the case when gcc 5.x will be more buggy than
v4.9.y (as once proved by a nice detective story where a compiler bug
produced a file system corruption).
On Mon, May 3, 2021 at 12:29 PM Arnd Bergmann <arnd@arndb.de> wrote:
>
> On Mon, May 3, 2021 at 9:35 AM Alexander Dahl <ada@thorsis.com> wrote:
> >
> > Desktops and servers are all nice, however I just want to make you
> > aware, there are embedded users forced to stick to older cross
> > toolchains for different reasons as well, e.g. in industrial
> > environment. :-)
> >
> > This is no show stopper for us, I just wanted to let you be aware.
>
> Can you be more specific about what scenarios you are thinking of,
> what the motivations are for using an old compiler with a new kernel
> on embedded systems, and what you think a realistic maximum
> time would be between compiler updates?
>
> One scenario that I've seen previously is where user space and
> kernel are built together as a source based distribution (OE, buildroot,
> openwrt, ...), and the compiler is picked to match the original sources
> of the user space because that is best tested, but the same compiler
> then gets used to build the kernel as well because that is the default
> in the build environment.
>
> There are two problems I see with this logic:
>
> - Running the latest kernel to avoid security problems is of course
>   a good idea, but if one runs that with ten year old user space that
>   is never updated, the system is likely to end up just as insecure.
>   Not all bugs are in the kernel.
>
> - The same logic that applies to ancient user space staying with
>   an ancient compiler (it's better tested in this combination) also
>   applies to the kernel: running the latest kernel on an old compiler
>   is something that few people test, and tends to run into more bugs
>   than using the compiler that other developers used to test that
>   kernel.

I understand that you are talking about embedded, but if you are stuck
with a distro (esp. an LTS one, like CentOS 7.x), you have gcc 4.8.5
there for everything, but they have got security updates. Seems if you
are with a distro you have to stick with its kernel with all pros and
cons of such an approach.
On Sun 2021-05-02 00:15:38, Masahiro Yamada wrote:
> The current minimum GCC version is 4.9 except ARCH=arm64 requiring
> GCC 5.1.

Please don't. I'm still on 4.9 on machine I can't easily update,

> Documentation/process/changes.rst | 2 +-
> arch/arm64/Kconfig                | 2 +-
> arch/powerpc/Kconfig              | 2 +-
> arch/riscv/Kconfig                | 2 +-
> include/linux/compiler-gcc.h      | 6 +-----
> lib/Kconfig.debug                 | 2 +-
> scripts/min-tool-version.sh       | 8 +-------
> 7 files changed, 7 insertions(+), 17 deletions(-)

and 10 lines of cleanups is really not worth that.

Best regards,
Pavel
On Sat, 2021-05-15 at 09:14 +0200, Pavel Machek wrote:
> On Sun 2021-05-02 00:15:38, Masahiro Yamada wrote:
> > The current minimum GCC version is 4.9 except ARCH=arm64 requiring
> > GCC 5.1.
>
> Please don't. I'm still on 4.9 on machine I can't easily update,

Why is that? Later compiler versions are available.

http://cdn.kernel.org/pub/tools/crosstool/

Is there some other reason your machine can not have the compiler
version updated?
diff --git a/Documentation/process/changes.rst b/Documentation/process/changes.rst
index dac17711dc11..cf104a8d1850 100644
--- a/Documentation/process/changes.rst
+++ b/Documentation/process/changes.rst
@@ -29,7 +29,7 @@ you probably needn't concern yourself with pcmciautils.
 ====================== ===============  ========================================
         Program        Minimal version       Command to check the version
 ====================== ===============  ========================================
-GNU C                  4.9              gcc --version
+GNU C                  5.2              gcc --version
 Clang/LLVM (optional)  10.0.1           clang --version
 GNU make               3.81             make --version
 binutils               2.23             ld -v
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7f2a80091337..fae9514dabab 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -78,7 +78,7 @@ config ARM64
 	select ARCH_SUPPORTS_LTO_CLANG_THIN
 	select ARCH_SUPPORTS_CFI_CLANG
 	select ARCH_SUPPORTS_ATOMIC_RMW
-	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128 && (GCC_VERSION >= 50000 || CC_IS_CLANG)
+	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
 	select ARCH_SUPPORTS_NUMA_BALANCING
 	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION if COMPAT
 	select ARCH_WANT_DEFAULT_BPF_JIT
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 1e6230bea09d..10dc47eac122 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -212,7 +212,7 @@ config PPC
 	select HAVE_FUNCTION_ERROR_INJECTION
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_FUNCTION_TRACER
-	select HAVE_GCC_PLUGINS if GCC_VERSION >= 50200 # plugin support on gcc <= 5.1 is buggy on PPC
+	select HAVE_GCC_PLUGINS
 	select HAVE_GENERIC_VDSO
 	select HAVE_HW_BREAKPOINT if PERF_EVENTS && (PPC_BOOK3S || PPC_8xx)
 	select HAVE_IDE
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 4515a10c5d22..748a5b37a0e5 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -226,7 +226,7 @@ config ARCH_RV32I
 config ARCH_RV64I
 	bool "RV64I"
 	select 64BIT
-	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128 && GCC_VERSION >= 50000
+	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128 && CC_IS_GCC
 	select HAVE_DYNAMIC_FTRACE if MMU
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE
 	select HAVE_FTRACE_MCOUNT_RECORD
diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index 5d97ef738a57..3608189dfc29 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -98,10 +98,8 @@
 
 #if GCC_VERSION >= 70000
 #define KASAN_ABI_VERSION 5
-#elif GCC_VERSION >= 50000
+#else
 #define KASAN_ABI_VERSION 4
-#elif GCC_VERSION >= 40902
-#define KASAN_ABI_VERSION 3
 #endif
 
 #if __has_attribute(__no_sanitize_address__)
@@ -122,9 +120,7 @@
 #define __no_sanitize_undefined
 #endif
 
-#if GCC_VERSION >= 50100
 #define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1
-#endif
 
 /*
  * Turn individual warnings and errors on and off locally, depending
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 678c13967580..0d0ed298905d 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -284,7 +284,7 @@ config DEBUG_INFO_DWARF4
 
 config DEBUG_INFO_DWARF5
 	bool "Generate DWARF Version 5 debuginfo"
-	depends on GCC_VERSION >= 50000 || (CC_IS_CLANG && (AS_IS_LLVM || (AS_IS_GNU && AS_VERSION >= 23502)))
+	depends on CC_IS_GCC || (CC_IS_CLANG && (AS_IS_LLVM || (AS_IS_GNU && AS_VERSION >= 23502)))
 	depends on !DEBUG_INFO_BTF
 	help
 	  Generate DWARF v5 debug info.
	  Requires binutils 2.35.2, gcc 5.0+ (gcc
diff --git a/scripts/min-tool-version.sh b/scripts/min-tool-version.sh
index d22cf91212b0..d5d0d26b8e7d 100755
--- a/scripts/min-tool-version.sh
+++ b/scripts/min-tool-version.sh
@@ -17,13 +17,7 @@ binutils)
 	echo 2.23.0
 	;;
 gcc)
-	# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63293
-	# https://lore.kernel.org/r/20210107111841.GN1551@shell.armlinux.org.uk
-	if [ "$SRCARCH" = arm64 ]; then
-		echo 5.1.0
-	else
-		echo 4.9.0
-	fi
+	echo 5.2.0
 	;;
 icc)
 	# temporary
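One of the compiler-gcc.h hunks above drops the GCC_VERSION >= 50100 guard around COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW, since every GCC the kernel would now support provides the overflow-checking builtins that include/linux/overflow.h keys off this macro. A small standalone illustration of what those builtins do (an example, not kernel code):

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	unsigned int sum;

	/* __builtin_add_overflow() stores the (possibly wrapped) result in
	 * sum and returns true when the mathematically correct value did
	 * not fit in the destination type. */
	bool wrapped = __builtin_add_overflow(4000000000u, 500000000u, &sum);

	printf("wrapped=%d sum=%u\n", wrapped, sum);	/* wrapped=1 sum=205032704 */
	return 0;
}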
The current minimum GCC version is 4.9 except ARCH=arm64 requiring
GCC 5.1.

When we discussed last time, we agreed to raise the minimum GCC version
to 5.1 globally. [1]

I'd like to propose GCC 5.2 to clean up arch/powerpc/Kconfig as well.

This commit updates the minimum versions in scripts/min-tool-version.sh
and Documentation/process/changes.rst with trivial code cleanups.

More cleanups will be possible as follow-up patches, but this one must
be agreed and applied to the mainline first.

[1]: https://lore.kernel.org/lkml/CAHk-=wjHTpG+gMx9vqrZgo8Uw0NqA2kNjS87o63Zv3=WG2K3zA@mail.gmail.com/

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
---

I'd like Linus to pick up this patch if there is no objection.

 Documentation/process/changes.rst | 2 +-
 arch/arm64/Kconfig                | 2 +-
 arch/powerpc/Kconfig              | 2 +-
 arch/riscv/Kconfig                | 2 +-
 include/linux/compiler-gcc.h      | 6 +-----
 lib/Kconfig.debug                 | 2 +-
 scripts/min-tool-version.sh       | 8 +-------
 7 files changed, 7 insertions(+), 17 deletions(-)