The question of the earliest GCC compiler version to support for building the Linux kernel comes up periodically. The ideal would be for Linux to compile under all GCC versions, because you never know what kind of system someone is running. Maybe their company's security team has to approve all software upgrades for their highly sensitive devices, and GCC is low on that list. Maybe they need to save as much space as possible, and recent versions of GCC are too big. There are all sorts of reasons why someone might be stuck with old software. But, they may need the latest Linux kernel because it's the foundation of their entire product, so they're stuck trying to compile it with an old compiler.
However, Linux can't really support every single GCC version. Sometimes the GCC people and the kernel people have disagreed about how GCC should produce code, and sometimes that means the kernel really doesn't compile well on a particular version of GCC. Occasional project wars emerge from those conflicts: the GCC people will say the compiler is doing the best thing possible, and the kernel people will say the compiler is messing up their code. Sometimes the GCC people change the behavior in a later release, but that still leaves a particular GCC version that produces bad Linux code.
So, the kernel people will decide programmatically to exclude a particular version of GCC from being used to compile the kernel. Any attempt to use that compiler on kernel code will produce an error.
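The mechanism is simple: a kernel header can just refuse to preprocess. Here's a minimal sketch of the technique, in the style of the kernel's <linux/compiler-gcc.h> (the version cutoff here is illustrative, not the kernel's actual limit):

    /*
     * Minimal sketch of compile-time compiler exclusion. GCC_VERSION is
     * built from the compiler's own version macros; the cutoff below is
     * illustrative only.
     */
    #define GCC_VERSION (__GNUC__ * 10000 \
                         + __GNUC_MINOR__ * 100 \
                         + __GNUC_PATCHLEVEL__)

    #if GCC_VERSION < 40500
    # error Sorry, your compiler is too old - please upgrade it.
    #endif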
But, sometimes the GCC people will add a new language feature that is so useful, the kernel people decide to rely heavily on it in their source code. In that case, there may be a period of time when the kernel people maintain one branch of code for the newer, better compiler and a separate, slower or more complex branch for the older, worse compiler. Eventually, the kernel people (or really, Linus Torvalds) may decide to stop supporting compilers older than a certain version, so they can rip out all those slower and more complex branches of code.
For similar reasons, dropping support for older compilers simply makes maintenance easier for the kernel folks, so it's something they would always prefer to do, if possible.
But, it's a big decision, typically weighed against the estimated number of users that are unable to upgrade their compilers. Linus really does not want even one regular (that is, non-kernel-developer) user to be unable to build Linux because of this sort of decision. He's willing to let the kernel carry a lot of fairly dead and/or hard-to-maintain code in order to keep that from happening.
Recently, the issue came up when Masahiro Yamada posted a patch to update the Linux kconfig system to support a new special property of his own design. The discussion took a theoretical turn, with Ulf Magnusson musing on the best way to conceptualize kernel config options, in particular with regard to cases where the user wasn't actually able to set a particular option because of system constraints.
These musings began to set the stage for the discussion of GCC version number support. So, I'm going to describe how that context developed and then loop back to the question of GCC version support in a bit.
Kees Cook pointed out that the issue of constraints on kernel config options could have serious security implications as well. For a while now, the GCC compiler has included code to identify and protect against certain buffer overflow attacks in its compiled output. But depending on the compiler version, the kconfig options to do that may or may not be available for the user to set. As Kees put it:
The goal is to record the user's selection regardless of compiler capability....Having _AUTO provides a way to pick "best possible for this compiler", though. If a user had previously selected _STRONG but they're doing builds with an older compiler (or a misconfigured newer compiler) without support, the goal is to _fail_ to build, not silently select _REGULAR.
Kees continued, saying, "what's gained is the logic for _AUTO, and the logic to not show, say, _STRONG when it's not available in the compiler. But we must still fail to build if _STRONG was in the .config. It can't silently rewrite it to _REGULAR because the compiler support for _STRONG regressed."
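Kees' policy is easy to state as code. Here's a hypothetical sketch (all names are invented) of the decision he's describing:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical sketch of the policy Kees describes. An explicit _STRONG
     * choice must fail the build if the compiler can't honor it, while
     * _AUTO quietly picks the best level the compiler supports. */
    enum sp_level { SP_NONE, SP_REGULAR, SP_STRONG };

    /* Returns 0 on success, -1 to abort the build. */
    int resolve_stackprotector(bool is_auto, enum sp_level chosen,
                               enum sp_level compiler_max,
                               enum sp_level *result)
    {
            if (is_auto) {
                    *result = compiler_max;  /* best possible for this compiler */
                    return 0;
            }
            if (chosen > compiler_max)
                    return -1;               /* fail; never silently downgrade */
            *result = chosen;
            return 0;
    }

    int main(void)
    {
            enum sp_level lvl;
            /* user chose _STRONG, but compiler only supports _REGULAR */
            int ret = resolve_stackprotector(false, SP_STRONG, SP_REGULAR, &lvl);
            printf("build %s\n", ret ? "fails" : "proceeds");
            return 0;
    }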
Linus came in at this point, partly misunderstanding the situation. Not realizing Kees was referring to a special stackprotector script that already existed in the kernel source tree, Linus thought Kees was suggesting implementing such a script especially for this case. He yelled at Kees a bit, and said, "The whole point was to simplify Kconfig, not to make it even worse." And he added, "Don't make some stupid script for stackprotector. If the user doesn't have a gcc that supports -fstackprotector-*, then don't show the options. It matters NOT ONE WHIT whether that then means that stackprotector will be off by default later."
Kees pointed out that the script in question was not something he was trying to add, but was something that already existed in the tree. He said:
It's been there since the very beginning when Arjan added it to validate that the compiler actually produces a stack protector when you give it -fstack-protector. Older gccs broke this entirely, more recent misconfigurations (as seen with some of Arnd's local gcc builds) did similar, and there have been regressions in some versions where gcc's x86 support flipped to the global canary instead of the %gs-offset canary.
Linus smacked himself on the head, saying, "The mentioned script (and bugzilla) was from 2006, I assumed this was all historical."
Annoyed at the situation, he continued, "But if it has broken again since, I guess we need to have a silly script. Grr."
But, he maintained that his disagreement was over what to do when the compiler didn't support the stronger form of buffer overflow protection that the user wanted. Linus said:
I also reacted to your earlier "It can't silently rewrite it to _REGULAR because the compiler support for _STRONG regressed." Because it damn well can. If the compiler doesn't support -fstack-protector-strong, we can just fall back on -fstack-protector. Silently. No extra crazy complex logic for that either.
A little while later, Linus posted again. He couldn't sit still with that script cluttering up the config system and had to investigate further. He said:
I was hoping to really just unify all the stupid compiler flag testing in just the Kconfig files and hoping we could really just use:
    config CC_xyz
            bool
            option cc_option '-fwhatever-xyz'
to set them, and then build Kconfig rules from that:
    config USE_xyz
            bool 'Some question that needs xyz'
            depends on CC_xyz
and have a nice simple:
ccflags-$(CONFIG_USE_xyz) += -fwhatever-xyz
in the Makefiles.
And one thought I had was "hey, if we need a script for -fstack-protector, maybe we can simply standardize on _everything_ using a script".
But doing the stats, we test about two _hundred_ different compiler options, and it really looks like -fstack-protector is the _only_ one that uses a dedicated script. Everything else is just using the "see if the compiler accepts the flag". So no, we wouldn't want to standardize around a script.
We do have a script for some other build options related to gcc breakage, but not command line flags per se: both "asm goto" and for gcc version generation. And gcc plugin compatibility checking.
Oh well. It looks like we really have to have those nasty exceptions from the normal rules.
And Kees agreed, saying:
Yeah, I was really disappointed to discover the broken gcc case Arnd had while I was testing the new ..._AUTO option. I thought I was going to be able to throw away a whole bunch of the complexity too. :( And this was on top of the recent discussion about raising the minimum gcc level to a place where there wasn't any need for the "old broken gcc" stack-protectors checks. But, no, that would have been too easy. :(
And now at last, let's loop back to the question of supported GCC versions.
Nearby in the discussion, Ulf had asked how common it was to see a version of GCC that was broken in this way. And Kees replied:
I *thought* it was rare (i.e. gcc 4.2) but while working on ..._AUTO I found breakage in akpm's 4.4 gcc, and all of Arnd's gccs due to some very strange misconfiguration between the gcc build environment and other options. So, it turns out this is unfortunately common. The good news is that it does NOT appear to happen with most distro compilers, though I've seen Android's compiler regress the global vs %gs at least once about a year ago.
However, Linus had an entirely different view of the significance of the cases Kees had uncovered. He said, "Hmm. Ok, so it's not *that* common, and won't affect normal people." In other words, almost all regular users use pre-packaged distributions, and nearly all of those are okay, so the situation as a whole is okay.
Linus continued:
That actually sounds like we could just
(a) make gcc 4.5 be the minimum required version
(b) actually error out if we find a bad compiler.
Upgrading the minimum required gcc version to 4.5 is pretty much going to happen _anyway_, because we're starting to rely on "asm goto" for avoiding speculation.
End result: maybe we can make the configuration phase just use the standard "does gcc support this flag" logic, and then just have a separate script that is run to validate that gcc doesn't generate garbage, and error out loudly if it does.
To which Kees replied gleefully, "I love bumping minimum for so many reasons more than just stack protector. :)"
Linus also expanded on his post, saying:
Just to explain why that's different from what we do now (apart from the "error out" thing to actually actively discourage broken compilers): it simplifies things if we can just add a trivial check as part of the build process rather than as part of the configuration.
If we don't have to make it part of the configuration, we can use all the normal Kconfig rules and build rules, and then we can add the error check to any number of places where we already do various object file munging.
For example, it would be pretty trivial to do the "check that there's the right segment access" as part of link-vmlinux.sh or something.
And that would allow us to just use all the regular config infrastructure, including the (hopefully by 4.17) "cc-option" thing at Kconfig time.
And, replying to Kees' comment about loving to bump the compiler version, Linus said:
It's still not a very *big* bump. With modern distros being at 7.3, and people testing pre-releases of gcc-8, something like gcc-4.5 is still pretty darn ancient.
But it would be good to be able to rely on asm goto rather than have completely different logic for "have to do it by hand". And I do wonder how many of our "let's test if gcc supports this option" are completely out-dated.
And in <linux/compiler-gcc.h> we still have tests for truly ancient garbage.
And he added, "What I would worry about primarily is not one of the odd developers who can upgrade, but random people in the wild. I don't want to lose the occasional odd tester that does things nobody else does. But with gcc-4.5 being 7+ years old, I can't imagine it's a huge issue."
Linus also gave some additional insight into the version support question:
It's worth noting that our _documentation_ may claim that gcc-3.2 is the minimum supported version, but Arnd pointed out a few months ago that apparently nothing older than 4.1 has actually worked for a longish while, and gcc-4.3 was needed on several architectures.
So the _real_ jump in required gcc version would be from 4.1 (4.3 in many cases) to 4.5, not from our documented "3.2 minimum".
Arnd claimed that some architectures needed even newer-than-4.3, but I assume that's limited to things like RISC-V that simply don't have old gcc support at all.
That was from a discussion about a bug report that only happened with gcc-4.4, and was because gcc-4.4 did insane things, so we were talking about how it wasn't necessarily worth supporting.
So we really have had a lot of unrelated reasons why just saying "gcc-4.5 or newer" would be a good thing.
Regarding Linus' guess about RISC-V chips, Arnd Bergmann replied:
Right. Also architecture specific features may need something more recent, and in some cases like the "initializer for anonymous union needs extra curly braces", a trivial change would make it work, but a lot of architectures have obviously never been built with toolchains old enough to actually run into those cases.
Geert is the only person I know that actively uses gcc-4.1, and he actually sent some patches that seem to get additional architectures to build on that version, when they were previously on gcc-4.3+.
gcc-4.3 in turn is used by default on SLES11, which is still in support, and I've even worked with someone who used that compiler to build new kernels, since that was what happened to be installed on his shared build server. In this case, having gcc-4.3 actively refused, to force him to use a new compiler, would have saved us some debugging trouble.
In my tests last year, I identified gcc-4.6 as a nice minimum level, IIRC gcc-4.5 was unable to build some of the newer ARM targets.
Kees agreed with Arnd, saying, "if Linus wants 4.5 over 4.3, I would agree with Arnd: let's take it to 4.6 instead."
And Linus replied:
So it sounds like Arnd knows what the distros have.
Because I think that would actually be the best way to try to determine where we want to go, because it's what is going to determine what is most problematic for _users_.
If no distro is on 4.5, then there's no reason to pick that. The reason I mentioned 4.5 is because that's the "asm goto" point, afaik, and that is likely to be a requirement in the near future.
If SLES11 is 4.3, that's obviously a concern. Although Arnd seemed to imply that that had already caused problems.
But Geert Uytterhoeven remarked, "as long as gcc-4.1 helps me finding real bugs (which it did for the current merge window), I plan to keep on using it."
Peter Zijlstra was johnny-on-the-spot with some patches to start freeing up the Linux code from having to support older compilers. He quickly wrote and posted some of those patches, and said, "So the unofficial plan was to enforce asm-goto and -mfentry support by hard failure to build, which would get us at gcc-4.6 and then remove all the fallback cruft needed for those features -- for x86. If we want to do this tree wide, that's obviously OK with me too."
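Since "asm goto" keeps coming up as the feature forcing everyone's hand, here's a toy illustration of what it actually does (x86 and GCC-specific, and emphatically not kernel code): the inline assembly is allowed to branch straight to a C label, which is what makes runtime-patched branches cheap.

    #include <stdio.h>

    /* Toy illustration of "asm goto" (GCC 4.5+). The kernel uses this for
     * static branches that get patched at runtime; the jump here is
     * hard-wired purely for demonstration. */
    static int always_taken(void)
    {
            asm goto("jmp %l[taken]" : /* no outputs */ : : : taken);
            return 0;
    taken:
            return 1;
    }

    int main(void)
    {
            printf("branch taken: %d\n", always_taken());
            return 0;
    }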
Responding to Linus' query about which distributions might require GCC 4.5, Arnd said:
Looking at old distros with long support cycles:
Red Hat has either gcc-4.1 (EL5, extended support ends 2020) or gcc-4.4 (EL6, regular support ends 2020, extended support ends 2024): https://access.redhat.com/solutions/19458.
EL7 uses gcc-4.8 and will be supported until 2024 (no extended support planned).
SUSE has gcc-4.3 (SLES11, extended support ends 2022) or gcc-4.8 (SLES12, support ends 2024, extended support ends 2027): https://www.suse.com/lifecycle.
Debian Jessie (oldstable) comes with gcc-4.8 and is supported until June 2018, extended support until 2020.
Debian Wheezy (oldoldstable) uses gcc-4.6 or 4.7 depending on the architecture, extended support ends May 2018.
Ubuntu 14.04 is supported until 2019 and uses gcc-4.8.
The latest Android SDK provides (known broken) versions of gcc-4.8 and gcc-4.9 as well as clang.
OpenWRT 14.07 Barrier Breaker uses gcc-4.8; 12.09 Attitude Adjustment used gcc-4.6, but it's very unlikely that anyone cares about building new kernels with either.
Most embedded distros just build everything from source and are used to adapting to new requirements.
From that list above, it sounds like going all the way to gcc-4.8 would be a better candidate than 4.5 or 4.6, if we decide that 4.3 and 4.4 are both no longer desirable to support.
At this point, the discussion of minimum GCC version numbers came to a halt, and the conversation focused on specific kernel features related to the original patch posted by Masahiro.
It's fascinating to watch the ins and outs of a discussion like this. We can see the glee of Kees (the security guy) at the prospect of no longer having to worry about security issues with ancient compilers. We can see Linus' indifference to inconveniencing kernel developers, but his careful regard for users "in the wild" who might be doing things no one else does, and who thus might submit bug reports no one else would find. We see the care with which Arnd examined the available distributions to determine their true requirements, and thus the requirements of regular users. And we can see the encouragement a feature like "asm goto" gives the Linux developers to discard compilers that don't support it. In the discussion, Linus had no trouble saying that GCC 4.5 (the first to offer "asm goto") was bound to be the new minimum, just by virtue of offering that feature. Like Kees, Linus finds the prospect of uprooting a lot of undesirable legacy kernel code highly tempting. And we can see how each of these developers offered up his own idea of the best minimum, going all the way to GCC 4.8, as each new piece of information arrived.
Dave Hansen from Intel posted a patch and said, "Intel is considering adding a new bit to the IA32_ARCH_CAPABILITIES MSR (Model-Specific Register) to tell when RSB (Return Stack Buffer) underflow might be happening. Feedback on this would be greatly appreciated before the specification is finalized." He explained that the RSB:
...is a microarchitectural structure that attempts to help predict the branch target of RET instructions. It is implemented as a stack that is pushed on CALL and popped on RET. Being a stack, it can become empty. On some processors, an empty condition leads to use of the other indirect branch predictors which have been targeted by Spectre variant 2 (branch target injection) exploits.
The new MSR bit, Dave explained, would tell the CPU not to rely on data from the RSB if the RSB was already empty.
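As an aside, capability MSRs like this one can be inspected from userspace through the kernel's msr driver. Here's a hypothetical sketch (the bit position is invented, since the specification wasn't final):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical sketch: peeking at IA32_ARCH_CAPABILITIES (MSR 0x10a)
     * through the kernel's msr driver (requires the "msr" module and root).
     * The RSB bit position below is made up. */
    #define MSR_IA32_ARCH_CAPABILITIES 0x10a
    #define ARCH_CAP_RSB_BIT (1ULL << 2)  /* placeholder bit position */

    int main(void)
    {
            uint64_t caps = 0;
            int fd = open("/dev/cpu/0/msr", O_RDONLY);

            if (fd < 0) {
                    perror("open /dev/cpu/0/msr");
                    return 1;
            }
            /* For the msr driver, the read offset selects the MSR number. */
            if (pread(fd, &caps, sizeof(caps), MSR_IA32_ARCH_CAPABILITIES)
                != sizeof(caps)) {
                    perror("pread");
                    close(fd);
                    return 1;
            }
            printf("hypothetical RSB bit is %s\n",
                   (caps & ARCH_CAP_RSB_BIT) ? "set" : "clear");
            close(fd);
            return 0;
    }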
Linus Torvalds replied:
Yes, please. It would be lovely to not have any "this model" kind of checks.
Of course, your patch still doesn't allow for "we claim to be skylake for various other independent reasons, but the RSB issue is fixed".
So it might actually be even better with _two_ bits: "explicitly needs RSB stuffing" and "explicitly fixed and does _not_ need RSB stuffing".
And then if neither bit is set, we fall back to the implicit "we know Skylake needs it".
If both bits are set, we just go with a "CPU is batshit schitzo" message, and assume it needs RSB stuffing just because it's obviously broken.
On second thought, however, Linus withdrew his initial criticism of Dave's patch regarding claiming to be Skylake for non-RSB reasons. In a subsequent email, Linus said, "maybe nobody ever has a reason to do that, though?" He went on to say:
Virtualization people may simply want the user to specify the model, but then make the Spectre decisions be based on actual hardware capabilities (whether those are "current" or "some minimum base"). Two bits allow that. One bit means "if you claim you're running skylake, we'll always have to stuff, whether you _really_ are or not".
Arjan van de Ven agreed it was extremely unlikely that anyone would claim to be skylake unless it was to take advantage of the RSB issue.
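Linus' two-bit scheme boils down to a small decision table. Here's a hypothetical rendering in C (all names invented):

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical rendering of Linus' two-bit proposal: one bit explicitly
     * demands RSB stuffing, the other explicitly declares the problem fixed. */
    bool needs_rsb_stuffing(bool needs_bit, bool fixed_bit,
                            bool model_says_skylake)
    {
            if (needs_bit && fixed_bit)
                    return true;           /* contradictory: assume broken, stuff anyway */
            if (needs_bit)
                    return true;           /* explicitly needs stuffing */
            if (fixed_bit)
                    return false;          /* explicitly fixed */
            return model_says_skylake;     /* neither bit: fall back to model check */
    }

    int main(void)
    {
            /* neither bit set on a part that claims to be Skylake */
            printf("stuff RSB: %d\n", needs_rsb_stuffing(false, false, true));
            return 0;
    }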
That was it for the discussion, but it's very cool that Intel is consulting with the kernel people about these sorts of hardware decisions. It's an indication of good transparency and an attempt to avoid the fallout of making a bad technical decision that would incur further ire from the kernel developers.
Dietmar Eggemann posted a patch from Quentin Perret to take advantage of energy-efficient CPUs on asymmetric multiprocessor (AMP) systems. AMP is distinguished from SMP (symmetric multiprocessor) systems in that an SMP system uses several instances of only one type of CPU, while an AMP system might use CPUs of differing speeds, feature sets and so on.
Quentin's patch was an effort to take advantage of differences in power consumption between the CPUs on an AMP system. It attempted to identify the most efficient CPU that was not already saturated with processes and assign newly awakened processes to it. If no CPUs fit the bill, standard SMP-type methods of processor assignment would be used instead.
Dietmar explained, "The selection of the most energy-efficient CPU for a task is achieved by estimating the impact on system-level active energy resulting from the placement of the task on each candidate CPU. The best CPU energy-wise is then selected if it saves a large enough amount of energy with respect to prev_cpu."
He acknowledged that this algorithm was a brute-force approach that could work well only on systems with a relatively small number of CPUs. He said, "This patch is an attempt to do something useful, as writing a fast heuristic that performs reasonably well on a broad spectrum of architectures isn't an easy task."
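To make the approach concrete, here's a toy version of that brute-force loop (all names, numbers and the energy model are invented; the real patch works from the scheduler's utilization signals and a per-CPU energy model):

    #include <stdio.h>

    #define NCPUS 4

    /* Toy stand-in for the scheduler's estimate: predicted system-level
     * energy if the task (task_util) were placed on "cpu". */
    static double est_system_energy(int task_util, int cpu,
                                    const int load[], const double cost[])
    {
            double energy = 0.0;
            for (int c = 0; c < NCPUS; c++)
                    energy += (load[c] + (c == cpu ? task_util : 0)) * cost[c];
            return energy;
    }

    /* Brute-force placement: try the task on every non-saturated CPU, and
     * move it off prev_cpu only if that saves enough energy. */
    static int find_energy_efficient_cpu(int task_util, int prev_cpu,
                                         const int load[], const double cost[])
    {
            double prev_energy = est_system_energy(task_util, prev_cpu, load, cost);
            double best_energy = prev_energy;
            int best_cpu = prev_cpu;

            for (int cpu = 0; cpu < NCPUS; cpu++) {
                    if (load[cpu] + task_util > 100)  /* skip saturated CPUs */
                            continue;
                    double energy = est_system_energy(task_util, cpu, load, cost);
                    if (energy < best_energy) {
                            best_energy = energy;
                            best_cpu = cpu;
                    }
            }

            /* "Large enough amount of energy": require a 6% saving here. */
            if (prev_energy - best_energy > 0.06 * prev_energy)
                    return best_cpu;
            return prev_cpu;
    }

    int main(void)
    {
            int load[NCPUS] = { 40, 10, 70, 0 };          /* % utilization */
            double cost[NCPUS] = { 1.0, 0.4, 1.0, 0.4 };  /* relative energy per unit load */

            printf("wake task on CPU %d\n",
                   find_energy_efficient_cpu(20, 0, load, cost));
            return 0;
    }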
Patrick Bellasi and Joel Fernandes had no serious objections to the patch and offered some technical suggestions. The discussion delved into various technical issues and specific ways of addressing them, with no one raising any controversial issues.
This is the type of situation with a patch where it might look like a lack of opposition could let it sail into the kernel tree, but really, it just hasn't been thoroughly examined by Linux bigwigs yet. Once the various contributors have gotten the patch as good as they can get it without deeper feedback, they'll probably send it up the ladder for inclusion in the main source tree. At that point, the security folks will jump all over it, looking for ways that a malicious user might force processes all onto only one particular CPU (essentially mounting a denial-of-service attack) or some such thing. Even if the patch survives that scrutiny, one of the other big-time kernel people, or even Linus Torvalds, could reject the patch on the grounds that it should represent a solution for large-scale systems as well as small.
Either way, something like Dietmar and Quentin's patch will be desirable in the kernel, because it's always good to take advantage of a system's full range of capabilities. And nowadays, a lot of devices are coming out with asymmetric CPUs and other quirks that never were part of earlier general-purpose systems. So, there's definitely a lot to be gained in seeing this sort of patch go into the tree.
Mickaël Salaün posted a patch to improve communication between landlocked processes. Landlock is a security module that creates an isolated "sandbox" where a process is prevented from interacting with the rest of the system, even if that process itself is compromised by a hostile attacker. The ultimate goal is to allow regular user processes to isolate themselves in this way, reducing the likelihood that they could be an entry point for an attack against the system.
Mickaël's patch, which didn't get very far in the review process, aimed specifically at allowing landlocked processes to use system calls to manipulate other processes. To do that, he wanted to force the landlocked process to obey any constraints that also might apply to the target process. For example, the target process may not allow other processes to trace its execution. In that case, the landlocked process should be prevented from doing so.
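The constraint Mickaël wanted can be pictured as nested sandboxes. Here's a toy model (all names invented, and much simpler than the real patch) of that kind of check:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Toy model: a sandboxed tracer may manipulate a target only if the
     * target carries at least the tracer's restrictions, modeled here as
     * nested sandbox "domains". */
    struct domain {
            const struct domain *parent;  /* enclosing, less-restricted sandbox */
    };

    static bool may_trace(const struct domain *tracer,
                          const struct domain *tracee)
    {
            if (!tracer)
                    return true;  /* unsandboxed tracer: normal ptrace rules apply */
            /* Walk outward from the tracee; the tracer must be one of its
             * enclosing domains (or the same one), so tracing can never
             * escape a restriction the tracer itself is under. */
            for (const struct domain *d = tracee; d; d = d->parent)
                    if (d == tracer)
                            return true;
            return false;
    }

    int main(void)
    {
            struct domain outer = { NULL };
            struct domain inner = { &outer };

            printf("outer may trace inner: %d\n", may_trace(&outer, &inner)); /* 1 */
            printf("inner may trace outer: %d\n", may_trace(&inner, &outer)); /* 0 */
            return 0;
    }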
Andy Lutomirski looked at the patch and offered some technical suggestions, but on further reflection, he felt Mickaël's approach was too complicated. He felt it was possible that the patch itself was simply unnecessary, but that if it did have a value, it simply should prevent any landlocked process from tracing another process' execution. Andy pointed to certain kernel features that would make the whole issue a lot more problematic. He said, "If something like Tycho's notifiers goes in, then it's not obvious that, just because you have the same set of filters, you have the same privilege. Similarly, if a feature that lets a filter query its cgroup goes in (and you proposed this once!), then the logic you implemented here is wrong."
Andy's overall assessment of landlock was, "I take this as further evidence that Landlock makes much more sense as part of seccomp than as a totally separate thing. We've very carefully reviewed these things for seccomp. Please don't make us do it again from scratch."
But Mickaël felt that landlock did have some valid use cases Andy hadn't mentioned—for example, "running a container constrained with some Landlock programs". Without his patch, Mickaël felt it would be impossible for users in that situation to debug their work. As he put it, "This patch adds the minimal protections which are needed to have a meaningful Landlock security policy. Without it, they may be easily bypassable, hence useless."
And as for folding landlock into seccomp, Mickaël replied, "Landlock is more complex than seccomp, because of its different goal. seccomp is less restrictive because it is more simple."
Andy replied to Mickaël's example of running a container constrained with Landlocked programs, saying, "Any sane container trying to use Landlock like this would also create a PID namespace. Problem solved." He added, "It sounds like you want Landlock to be a complete container system all by itself. I disagree with that design goal." And, he said he still felt the patch should simply be dropped.
But apparently, after delving further into the code, Andy felt his criticism was not quite right. He still felt the patch should be dropped, but he had refined his reason why. He said:
I can see an argument for having a flag that one can set when adding a seccomp filter that says "prevent ptrace of any child that doesn't have this exact stack installed", but I think that could be added later and should not be part of an initial submission. For now, Landlock users can block ptrace() entirely or use PID namespaces.
But, Mickaël said there were other use cases beyond his container example. He said:
...another is built-in sandboxing (e.g. for web browser) and another one is for sandbox managers (e.g. Firejail, Bubblewrap, Flatpak). In some of these use cases, especially from a developer point of view, you may want/need to debug your applications (without requiring to be root). For nested Landlock access-controls (e.g. container + user session + web browser), it may not be allowed to create a PID namespace, but you still want to have a meaningful access-control.
But, Andy was not convinced. He argued even more strongly that "If there's a real use case for adding this type of automatic ptrace protection, then by all means, let's add it as a general seccomp feature."
Mickaël agreed that these features also would make sense for seccomp, but he still felt his own patch was useful aside from that. And at that point, the discussion came to an end.
This sort of security debate is often really tough to follow or predict. Developers never know when someone from a seemingly distant part of the kernel is going to home in on their patch, saying either that it creates a security hole somewhere else or that the feature itself really belongs somewhere else. Many hours of work can be lost, just because developers didn't know that the thing they were working on would be more acceptable in a completely different part of the kernel. Likewise, the person criticizing their patch may have missed a crucial detail, and suddenly, after many email messages, it turns out the original coder was doing the right thing after all, and lo and behold, there are plenty of uses for that code. It's impossible to guess which way the debate will turn, which is one reason kernel developers often push very hard to get their point across, even to the point of seeming intractable on a given issue.
Note: if you're mentioned in this article and want to send a response, please send a message with your response text to ljeditor@linuxjournal.com and we'll run it in the next Letters section and post it on the website as an addendum to the original article.