Message ID | 20250116185458.3272683-1-rrendec@redhat.com
---|---
Series | arm64: cacheinfo: Avoid out-of-bounds write when DT info is incorrect
On Thu, Jan 16, 2025 at 01:54:57PM -0500, Radu Rendec wrote:
> Hi all,
>
> I found an out-of-bounds write bug in the arm64 implementation of populate_cache_leaves() and I'm trying to fix it. The reason why this is an RFC is that I'm not entirely sure these changes are sufficient.
>
> The problem is described in detail in the patch itself, so I won't repeat it here. The gist of it is that on arm64 boards where the device tree describes the cache structure, the number of cache leaves that comes out of fetch_cache_info() and is used to size the cacheinfo array can be smaller than the actual number of leaves. In that case, populate_cache_leaves() writes past the cacheinfo array bounds.
>
> The way I fixed it, the code doesn't change too much and doesn't look ugly, but it's still possible to write past the array bounds if the last populated level is a separate data/instruction cache. But I'm not sure if this is a real-world scenario, so this is one of the areas where I'm looking for feedback. For example, the DT may define a single unified level (so levels=1, leaves=1) but in fact L1 is a split D/I cache. Or the DT defines multiple levels, the last one being a unified cache, but in reality the last level is a split D/I cache. I believe the latter scenario is very unlikely since typically only L1 is a split D/I cache.

I think it is a safe assumption that there is not a split cache below a unified cache. The DT binding doesn't support that either, as 'next-level-cache' is a single phandle (though I guess it could be extended to allow multiple phandles).

> The other thing that doesn't look right is that init_of_cache_level() bumps the level at each iteration to whatever the node corresponding to that level has in the `cache-level` property, but that way one or more levels can be skipped without accounting any leaves for them. This is exactly what happens in my case (the Renesas R-Car S4 board, described in arch/arm64/boot/dts/renesas/r8a779f0.dtsi). In this case, even if populate_cache_leaves() is fixed, the cache information in the kernel will be incomplete. Shouldn't init_of_cache_level() return -EINVAL also when a level is skipped altogether?
>
> Last, and also related to the previous question, is a device tree definition that skips a cache level correct? Or, in other words, should the definition in r8a779f0.dtsi be fixed?

The answer depends on whether there are L2 caches in the cores or not. They are optional in the A55. Calling the L3 "level 2" when there's no L2 would be confusing. So if there's not an L2, I think the description is correct (though perhaps missing L1 cache sizes, if that's not otherwise discoverable). If there are L2 caches, then there should be a cache node under each cpu node.

It could get even messier with heterogeneous systems where some cores have an L2 and some don't.

So the cacheinfo code needs to deal with skipped levels.

Rob
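For readers following the thread, a condensed sketch of the loop being discussed may help. It is simplified from arch/arm64/kernel/cacheinfo.c, and helper names or details may differ from any given kernel version:

/*
 * Simplified sketch, not verbatim kernel code. fetch_cache_info() sizes
 * the per-CPU info_list array from the DT leaf count, while this loop
 * walks the cache levels reported by CLIDR_EL1.
 */
int populate_cache_leaves(unsigned int cpu)
{
        unsigned int level, idx;
        enum cache_type type;
        struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
        struct cacheinfo *this_leaf = this_cpu_ci->info_list;

        for (idx = 0, level = 1; level <= this_cpu_ci->num_levels &&
             idx < this_cpu_ci->num_leaves; idx++, level++) {
                type = get_cache_type(level);   /* derived from CLIDR_EL1 */
                if (type == CACHE_TYPE_SEPARATE) {
                        /*
                         * Two writes, but idx (checked against the DT-derived
                         * num_leaves) only advances by one, so a split level
                         * that the DT described as unified overruns the array.
                         */
                        ci_leaf_init(this_leaf++, CACHE_TYPE_DATA, level);
                        ci_leaf_init(this_leaf++, CACHE_TYPE_INST, level);
                } else {
                        ci_leaf_init(this_leaf++, type, level);
                }
        }
        return 0;
}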
On Fri, 2025-01-17 at 07:08 -0600, Rob Herring wrote:
> On Thu, Jan 16, 2025 at 01:54:57PM -0500, Radu Rendec wrote:
> > I found an out-of-bounds write bug in the arm64 implementation of populate_cache_leaves() and I'm trying to fix it. The reason why this is an RFC is that I'm not entirely sure these changes are sufficient.
> >
> > The problem is described in detail in the patch itself, so I won't repeat it here. The gist of it is that on arm64 boards where the device tree describes the cache structure, the number of cache leaves that comes out of fetch_cache_info() and is used to size the cacheinfo array can be smaller than the actual number of leaves. In that case, populate_cache_leaves() writes past the cacheinfo array bounds.
> >
> > The way I fixed it, the code doesn't change too much and doesn't look ugly, but it's still possible to write past the array bounds if the last populated level is a separate data/instruction cache. But I'm not sure if this is a real-world scenario, so this is one of the areas where I'm looking for feedback. For example, the DT may define a single unified level (so levels=1, leaves=1) but in fact L1 is a split D/I cache. Or the DT defines multiple levels, the last one being a unified cache, but in reality the last level is a split D/I cache. I believe the latter scenario is very unlikely since typically only L1 is a split D/I cache.
>
> I think it is a safe assumption that there is not a split cache below a unified cache. The DT binding doesn't support that either, as 'next-level-cache' is a single phandle (though I guess it could be extended to allow multiple phandles).

If I'm reading the code correctly, even though 'next-level-cache' is a single phandle, the node it points to can still have 'i-cache-size' and 'd-cache-size' properties, which would make it a split cache - at least as far as of_count_cache_leaves() is concerned.

But I'm more worried about the scenario where the DT defines only L1 and defines it as a unified cache but CLIDR_EL1 reports it's a split cache. In that case, populate_cache_leaves() would still write past the array bounds, even with my proposed changes.

I'm starting to think it's much safer to just check the size correctly instead of relying on assumptions about what cache layout is (un)likely.

> > The other thing that doesn't look right is that init_of_cache_level() bumps the level at each iteration to whatever the node corresponding to that level has in the `cache-level` property, but that way one or more levels can be skipped without accounting any leaves for them. This is exactly what happens in my case (the Renesas R-Car S4 board, described in arch/arm64/boot/dts/renesas/r8a779f0.dtsi). In this case, even if populate_cache_leaves() is fixed, the cache information in the kernel will be incomplete. Shouldn't init_of_cache_level() return -EINVAL also when a level is skipped altogether?
> >
> > Last, and also related to the previous question, is a device tree definition that skips a cache level correct? Or, in other words, should the definition in r8a779f0.dtsi be fixed?
>
> The answer depends on whether there are L2 caches in the cores or not. They are optional in the A55. Calling the L3 "level 2" when there's no L2 would be confusing. So if there's not an L2, I think the description is correct (though perhaps missing L1 cache sizes, if that's not otherwise discoverable). If there are L2 caches, then there should be a cache node under each cpu node.
>
> It could get even messier with heterogeneous systems where some cores have an L2 and some don't.
>
> So the cacheinfo code needs to deal with skipped levels.

I just checked the technical documentation (I should have done that in the first place), and it's consistent with the DT. There is no L2 on that system, and this reconfirms what you just said.

But this is where it gets interesting. CLIDR_EL1 is 0x82000023, so it reports L1 and L2. That means the information populate_cache_leaves() puts into the kernel data structures is not only inconsistent with the DT but also incorrect.

Is CLIDR_EL1 supposed to "skip" missing levels, i.e. report L2 instead of L3 if L2 doesn't exist?

Thanks,
Radu
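The Ctype fields of CLIDR_EL1 are three bits per cache level (Ctype1 in bits [2:0], Ctype2 in bits [5:3], and so on), so the value quoted above can be decoded by hand. A small stand-alone sketch in ordinary user-space C, using the value reported on this board:

#include <stdint.h>
#include <stdio.h>

static const char *ctype_name(unsigned int t)
{
        switch (t) {
        case 0: return "no cache";
        case 1: return "instruction only";
        case 2: return "data only";
        case 3: return "separate instruction and data";
        case 4: return "unified";
        default: return "reserved";
        }
}

int main(void)
{
        uint64_t clidr = 0x82000023;    /* value quoted for the R-Car S4 */

        for (int level = 1; level <= 7; level++) {
                unsigned int ctype = (clidr >> (3 * (level - 1))) & 0x7;

                if (ctype == 0)
                        break;
                printf("L%d: %s\n", level, ctype_name(ctype));
        }
        /*
         * Prints:
         *   L1: separate instruction and data
         *   L2: unified
         * i.e. the register reports a unified "level 2" on a system whose
         * documentation and DT describe only L1 and L3.
         */
        return 0;
}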
On Fri, 17 Jan 2025 11:59:36 -0500 Radu Rendec <rrendec@redhat.com> wrote:

> On Fri, 2025-01-17 at 07:08 -0600, Rob Herring wrote:
> > On Thu, Jan 16, 2025 at 01:54:57PM -0500, Radu Rendec wrote:
> > > I found an out-of-bounds write bug in the arm64 implementation of populate_cache_leaves() and I'm trying to fix it. The reason why this is an RFC is that I'm not entirely sure these changes are sufficient.
> > >
> > > The problem is described in detail in the patch itself, so I won't repeat it here. The gist of it is that on arm64 boards where the device tree describes the cache structure, the number of cache leaves that comes out of fetch_cache_info() and is used to size the cacheinfo array can be smaller than the actual number of leaves. In that case, populate_cache_leaves() writes past the cacheinfo array bounds.
> > >
> > > The way I fixed it, the code doesn't change too much and doesn't look ugly, but it's still possible to write past the array bounds if the last populated level is a separate data/instruction cache. But I'm not sure if this is a real-world scenario, so this is one of the areas where I'm looking for feedback. For example, the DT may define a single unified level (so levels=1, leaves=1) but in fact L1 is a split D/I cache. Or the DT defines multiple levels, the last one being a unified cache, but in reality the last level is a split D/I cache. I believe the latter scenario is very unlikely since typically only L1 is a split D/I cache.
> >
> > I think it is a safe assumption that there is not a split cache below a unified cache. The DT binding doesn't support that either, as 'next-level-cache' is a single phandle (though I guess it could be extended to allow multiple phandles).
>
> If I'm reading the code correctly, even though 'next-level-cache' is a single phandle, the node it points to can still have 'i-cache-size' and 'd-cache-size' properties, which would make it a split cache - at least as far as of_count_cache_leaves() is concerned.
>
> But I'm more worried about the scenario where the DT defines only L1 and defines it as a unified cache but CLIDR_EL1 reports it's a split cache. In that case, populate_cache_leaves() would still write past the array bounds, even with my proposed changes.
>
> I'm starting to think it's much safer to just check the size correctly instead of relying on assumptions about what cache layout is (un)likely.
>
> > > The other thing that doesn't look right is that init_of_cache_level() bumps the level at each iteration to whatever the node corresponding to that level has in the `cache-level` property, but that way one or more levels can be skipped without accounting any leaves for them. This is exactly what happens in my case (the Renesas R-Car S4 board, described in arch/arm64/boot/dts/renesas/r8a779f0.dtsi). In this case, even if populate_cache_leaves() is fixed, the cache information in the kernel will be incomplete. Shouldn't init_of_cache_level() return -EINVAL also when a level is skipped altogether?
> > >
> > > Last, and also related to the previous question, is a device tree definition that skips a cache level correct? Or, in other words, should the definition in r8a779f0.dtsi be fixed?
> >
> > The answer depends on whether there are L2 caches in the cores or not. They are optional in the A55. Calling the L3 "level 2" when there's no L2 would be confusing. So if there's not an L2, I think the description is correct (though perhaps missing L1 cache sizes, if that's not otherwise discoverable). If there are L2 caches, then there should be a cache node under each cpu node.
> >
> > It could get even messier with heterogeneous systems where some cores have an L2 and some don't.
> >
> > So the cacheinfo code needs to deal with skipped levels.
>
> I just checked the technical documentation (I should have done that in the first place), and it's consistent with the DT. There is no L2 on that system, and this reconfirms what you just said.
>
> But this is where it gets interesting. CLIDR_EL1 is 0x82000023, so it reports L1 and L2. That means the information populate_cache_leaves() puts into the kernel data structures is not only inconsistent with the DT but also incorrect.
>
> Is CLIDR_EL1 supposed to "skip" missing levels, i.e. report L2 instead of L3 if L2 doesn't exist?
>
> Thanks,
> Radu

Hi Radu,

I believe this discussion is heading in the right direction, especially when examining the code. Currently, data is sometimes retrieved from the device tree and other times from the system's registers. While there are cross-checks in place, it can still be confusing and error-prone.

One scenario that comes to mind involves QEMU, where not all system registers, particularly the cache-related ones such as CLIDR_EL1, are implemented "correctly" in TCG. Additionally, at this point, QEMU does not generate any cache descriptions in the device tree, which is something I am actively working on [1]. This patch set will likely require the kernel to handle various cases and address these inconsistencies effectively.

I also found it challenging to understand how the number of leaves is calculated and used in the code. While I wasn't specifically looking for bugs, my focus was on identifying the best approach to share caches between SMT threads. There are some existing discussions and trials already [2,3]. But in the new one that I will share soon, I allow adding another layer: an l1-cache node referenced by next-level-cache at each CPU node. This will also reduce the complexity of counting leaves, instead of just looking for different properties in the CPU node. It might allow for easier testing and experimentation with more complex scenarios. It would also be good to hear thoughts on that one too.

1) https://mail.gnu.org/archive/html/qemu-arm/2025-01/msg00014.html
2) https://lore.kernel.org/linux-devicetree/CAL_JsqLGEvGBQ0W_B6+5cME1UEhuKXadBB-6=GoN1tmavw9K_w@mail.gmail.com/
3) https://lore.kernel.org/linux-arm-kernel/20250113115849.00006fee@huawei.com/T/#m1fc077e393ca2a785e67d818a67c20e1032ca31e
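As a rough map of the DT counting path that both messages refer to, here is a condensed sketch of the two helpers involved. It is simplified from drivers/base/cacheinfo.c; error handling and OF node reference counting are omitted, and the property helper names may differ between kernel versions:

static unsigned int of_count_cache_leaves(struct device_node *np)
{
        unsigned int leaves = 0;

        if (of_property_present(np, "cache-size"))
                ++leaves;
        if (of_property_present(np, "i-cache-size"))
                ++leaves;
        if (of_property_present(np, "d-cache-size"))
                ++leaves;

        /* No size properties at all: fall back on "cache-unified". */
        if (!leaves)
                return of_property_read_bool(np, "cache-unified") ? 1 : 2;

        return leaves;
}

int init_of_cache_level(unsigned int cpu)
{
        struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
        struct device_node *np = of_cpu_device_node_get(cpu);
        unsigned int levels = 0, leaves, level;

        leaves = of_count_cache_leaves(np);
        if (leaves > 0)
                levels = 1;

        while ((np = of_find_next_cache_node(np))) {
                if (of_property_read_u32(np, "cache-level", &level))
                        return -EINVAL;
                /*
                 * Only levels that go backwards are rejected; a gap such as
                 * L1 -> L3 with no L2 node is silently accepted, which is
                 * the skipped-level case discussed in this thread.
                 */
                if (level <= levels)
                        return -EINVAL;

                leaves += of_count_cache_leaves(np);
                levels = level;
        }

        this_cpu_ci->num_levels = levels;
        this_cpu_ci->num_leaves = leaves;
        return 0;
}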
Hi Alireza,

On Tue, 2025-01-21 at 09:20 +0000, Alireza Sanaee wrote:
> On Fri, 17 Jan 2025 11:59:36 -0500 Radu Rendec <rrendec@redhat.com> wrote:
> > On Fri, 2025-01-17 at 07:08 -0600, Rob Herring wrote:
> > > On Thu, Jan 16, 2025 at 01:54:57PM -0500, Radu Rendec wrote:
> > > > I found an out-of-bounds write bug in the arm64 implementation of populate_cache_leaves() and I'm trying to fix it. The reason why this is an RFC is that I'm not entirely sure these changes are sufficient.
> > > >
> > > > The problem is described in detail in the patch itself, so I won't repeat it here. The gist of it is that on arm64 boards where the device tree describes the cache structure, the number of cache leaves that comes out of fetch_cache_info() and is used to size the cacheinfo array can be smaller than the actual number of leaves. In that case, populate_cache_leaves() writes past the cacheinfo array bounds.
> > > >
> > > > The way I fixed it, the code doesn't change too much and doesn't look ugly, but it's still possible to write past the array bounds if the last populated level is a separate data/instruction cache. But I'm not sure if this is a real-world scenario, so this is one of the areas where I'm looking for feedback. For example, the DT may define a single unified level (so levels=1, leaves=1) but in fact L1 is a split D/I cache. Or the DT defines multiple levels, the last one being a unified cache, but in reality the last level is a split D/I cache. I believe the latter scenario is very unlikely since typically only L1 is a split D/I cache.
> > >
> > > I think it is a safe assumption that there is not a split cache below a unified cache. The DT binding doesn't support that either, as 'next-level-cache' is a single phandle (though I guess it could be extended to allow multiple phandles).
> >
> > If I'm reading the code correctly, even though 'next-level-cache' is a single phandle, the node it points to can still have 'i-cache-size' and 'd-cache-size' properties, which would make it a split cache - at least as far as of_count_cache_leaves() is concerned.
> >
> > But I'm more worried about the scenario where the DT defines only L1 and defines it as a unified cache but CLIDR_EL1 reports it's a split cache. In that case, populate_cache_leaves() would still write past the array bounds, even with my proposed changes.
> >
> > I'm starting to think it's much safer to just check the size correctly instead of relying on assumptions about what cache layout is (un)likely.
> >
> > > > The other thing that doesn't look right is that init_of_cache_level() bumps the level at each iteration to whatever the node corresponding to that level has in the `cache-level` property, but that way one or more levels can be skipped without accounting any leaves for them. This is exactly what happens in my case (the Renesas R-Car S4 board, described in arch/arm64/boot/dts/renesas/r8a779f0.dtsi). In this case, even if populate_cache_leaves() is fixed, the cache information in the kernel will be incomplete. Shouldn't init_of_cache_level() return -EINVAL also when a level is skipped altogether?
> > > >
> > > > Last, and also related to the previous question, is a device tree definition that skips a cache level correct? Or, in other words, should the definition in r8a779f0.dtsi be fixed?
> > >
> > > The answer depends on whether there are L2 caches in the cores or not. They are optional in the A55. Calling the L3 "level 2" when there's no L2 would be confusing. So if there's not an L2, I think the description is correct (though perhaps missing L1 cache sizes, if that's not otherwise discoverable). If there are L2 caches, then there should be a cache node under each cpu node.
> > >
> > > It could get even messier with heterogeneous systems where some cores have an L2 and some don't.
> > >
> > > So the cacheinfo code needs to deal with skipped levels.
> >
> > I just checked the technical documentation (I should have done that in the first place), and it's consistent with the DT. There is no L2 on that system, and this reconfirms what you just said.
> >
> > But this is where it gets interesting. CLIDR_EL1 is 0x82000023, so it reports L1 and L2. That means the information populate_cache_leaves() puts into the kernel data structures is not only inconsistent with the DT but also incorrect.
> >
> > Is CLIDR_EL1 supposed to "skip" missing levels, i.e. report L2 instead of L3 if L2 doesn't exist?
> >
> > Thanks,
> > Radu
>
> Hi Radu,
>
> I believe this discussion is heading in the right direction, especially when examining the code. Currently, data is sometimes retrieved from the device tree and other times from the system's registers. While there are cross-checks in place, it can still be confusing and error-prone.

Agreed. And the more I look at it, the more I think that information should not be pulled from different sources, because they *can* be inconsistent.

For arm64 specifically, the information that's extracted from CLIDR_EL1 does not seem to include more details than what the device tree can provide. Essentially it's just the level and type of cache (instruction, data, or unified). The same information can be extracted from the device tree, and looking at init_of_cache_level() it already is, but it's used just to count the levels/leaves.

For that reason, I would argue that for arm64, if the device tree (or ACPI) provides information about the cache, it should be used as a single source of truth, and CLIDR_EL1 should not be used. Even if the device tree (or ACPI) description is incomplete, we still can't extract information about more levels/leaves from CLIDR_EL1, because the number of leaves is pre-determined from DT/ACPI and populate_cache_leaves() will stop at that number of leaves. Or at least it should (that's the bug I'm trying to fix).

> One scenario that comes to mind involves QEMU, where not all system registers, particularly the cache-related ones such as CLIDR_EL1, are implemented "correctly" in TCG. Additionally, at this point, QEMU does not generate any cache descriptions in the device tree, which is something I am actively working on [1]. This patch set will likely require the kernel to handle various cases and address these inconsistencies effectively.
>
> I also found it challenging to understand how the number of leaves is calculated and used in the code. While I wasn't specifically looking for bugs, my focus was on identifying the best approach to share caches between SMT threads. There are some existing discussions and trials already [2,3]. But in the new one that I will share soon, I allow adding another layer: an l1-cache node referenced by next-level-cache at each CPU node. This will also reduce the complexity of counting leaves, instead of just looking for different properties in the CPU node. It might allow for easier testing and experimentation with more complex scenarios. It would also be good to hear thoughts on that one too.

I looked at the links you provided and I can understand what you're trying to do there. I believe this is an orthogonal problem, though, because struct cpu_cacheinfo is a per-CPU data structure. In the discussions you had, it was pointed out that physical CPU threads need to be modeled as distinct CPU nodes in the kernel, which means they will also have distinct cpu_cacheinfo structures. Even if the cache is physically shared, you will still have a separate struct cpu_cacheinfo for each of them in the kernel, regardless of where the information is extracted from.

> 1) https://mail.gnu.org/archive/html/qemu-arm/2025-01/msg00014.html
> 2) https://lore.kernel.org/linux-devicetree/CAL_JsqLGEvGBQ0W_B6+5cME1UEhuKXadBB-6=GoN1tmavw9K_w@mail.gmail.com/
> 3) https://lore.kernel.org/linux-arm-kernel/20250113115849.00006fee@huawei.com/T/#m1fc077e393ca2a785e67d818a67c20e1032ca31e
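To illustrate the per-CPU point above, this is roughly the shape of the structures involved; it is abridged, and the real definitions in include/linux/cacheinfo.h have more fields. Sharing between SMT siblings would show up in each leaf's shared_cpu_map rather than as a shared cpu_cacheinfo object:

struct cacheinfo {
        unsigned int level;             /* 1 = L1, 2 = L2, ... */
        enum cache_type type;           /* data, instruction, or unified */
        cpumask_t shared_cpu_map;       /* CPUs sharing this particular leaf */
        void *fw_token;                 /* DT node (or ACPI token) used for matching */
        /* ... sizes, ways, sets, etc. ... */
};

struct cpu_cacheinfo {
        struct cacheinfo *info_list;    /* one entry per leaf, for this CPU only */
        unsigned int num_levels;
        unsigned int num_leaves;
};

/*
 * Every CPU, including every SMT thread, gets its own cpu_cacheinfo and its
 * own info_list; a physically shared cache is expressed by the same CPUs
 * appearing in shared_cpu_map of the corresponding leaves.
 */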