Message ID | 20200821100519.1325691-1-imammedo@redhat.com (mailing list archive)
---|---
State | New, archived
Series | numa: hmat: fix cache size check
On 8/21/20 12:05 PM, Igor Mammedov wrote:
> when QEMU is started like:
>
> qemu-system-x86_64 -smp 2 -machine hmat=on \
> -m 2G \
> -object memory-backend-ram,size=1G,id=m0 \
> -object memory-backend-ram,size=1G,id=m1 \
> -numa node,nodeid=0,memdev=m0 \
> -numa node,nodeid=1,memdev=m1,initiator=0 \
> -numa cpu,node-id=0,socket-id=0 \
> -numa cpu,node-id=0,socket-id=1 \
> -numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-latency,latency=5 \
> -numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-bandwidth,bandwidth=200M \
> -numa hmat-lb,initiator=0,target=1,hierarchy=memory,data-type=access-latency,latency=10 \
> -numa hmat-lb,initiator=0,target=1,hierarchy=memory,data-type=access-bandwidth,bandwidth=100M \
> -numa hmat-cache,node-id=0,size=8K,level=1,associativity=direct,policy=write-back,line=5 \
> -numa hmat-cache,node-id=0,size=16K,level=2,associativity=direct,policy=write-back,line=5
>
> it errors out with:
> -numa hmat-cache,node-id=0,size=16K,level=2,associativity=direct,policy=write-back,line=5:
> Invalid size=16384, the size of level=2 should be less than the size(8192) of level=1
>
> which doesn't look right as one would expect that L1 < L2 < L3 ...
> Fix it by sawpping relevant size checks.

It makes sense (typo "swapping").

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>

> Fixes: c412a48d4d91 (numa: Extend CLI to provide memory side cache information)
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> ---
> CC: jingqi.liu@intel.com
>
>  hw/core/numa.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/hw/core/numa.c b/hw/core/numa.c
> index d1a94a14f8..f9593ec716 100644
> --- a/hw/core/numa.c
> +++ b/hw/core/numa.c
> @@ -425,10 +425,10 @@ void parse_numa_hmat_cache(MachineState *ms, NumaHmatCacheOptions *node,
>
>      if ((node->level > 1) &&
>          ms->numa_state->hmat_cache[node->node_id][node->level - 1] &&
> -        (node->size >=
> +        (node->size <=
>           ms->numa_state->hmat_cache[node->node_id][node->level - 1]->size)) {
>          error_setg(errp, "Invalid size=%" PRIu64 ", the size of level=%" PRIu8
> -                   " should be less than the size(%" PRIu64 ") of "
> +                   " should be larger than the size(%" PRIu64 ") of "
>                     "level=%u", node->size, node->level,
>                     ms->numa_state->hmat_cache[node->node_id]
>                                               [node->level - 1]->size,
> @@ -438,10 +438,10 @@ void parse_numa_hmat_cache(MachineState *ms, NumaHmatCacheOptions *node,
>
>      if ((node->level < HMAT_LB_LEVELS - 1) &&
>          ms->numa_state->hmat_cache[node->node_id][node->level + 1] &&
> -        (node->size <=
> +        (node->size >=
>           ms->numa_state->hmat_cache[node->node_id][node->level + 1]->size)) {
>          error_setg(errp, "Invalid size=%" PRIu64 ", the size of level=%" PRIu8
> -                   " should be larger than the size(%" PRIu64 ") of "
> +                   " should be less than the size(%" PRIu64 ") of "
>                     "level=%u", node->size, node->level,
>                     ms->numa_state->hmat_cache[node->node_id]
>                                               [node->level + 1]->size,
On Fri, Aug 21, 2020 at 06:05:19AM -0400, Igor Mammedov wrote:
> when QEMU is started like:
>
> qemu-system-x86_64 -smp 2 -machine hmat=on \
> -m 2G \
> -object memory-backend-ram,size=1G,id=m0 \
> -object memory-backend-ram,size=1G,id=m1 \
> -numa node,nodeid=0,memdev=m0 \
> -numa node,nodeid=1,memdev=m1,initiator=0 \
> -numa cpu,node-id=0,socket-id=0 \
> -numa cpu,node-id=0,socket-id=1 \
> -numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-latency,latency=5 \
> -numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-bandwidth,bandwidth=200M \
> -numa hmat-lb,initiator=0,target=1,hierarchy=memory,data-type=access-latency,latency=10 \
> -numa hmat-lb,initiator=0,target=1,hierarchy=memory,data-type=access-bandwidth,bandwidth=100M \
> -numa hmat-cache,node-id=0,size=8K,level=1,associativity=direct,policy=write-back,line=5 \
> -numa hmat-cache,node-id=0,size=16K,level=2,associativity=direct,policy=write-back,line=5
>
> it errors out with:
> -numa hmat-cache,node-id=0,size=16K,level=2,associativity=direct,policy=write-back,line=5:
> Invalid size=16384, the size of level=2 should be less than the size(8192) of level=1
>
> which doesn't look right as one would expect that L1 < L2 < L3 ...
> Fix it by sawpping relevant size checks.
>
> Fixes: c412a48d4d91 (numa: Extend CLI to provide memory side cache information)
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>

Queued, thanks!
diff --git a/hw/core/numa.c b/hw/core/numa.c
index d1a94a14f8..f9593ec716 100644
--- a/hw/core/numa.c
+++ b/hw/core/numa.c
@@ -425,10 +425,10 @@ void parse_numa_hmat_cache(MachineState *ms, NumaHmatCacheOptions *node,
 
     if ((node->level > 1) &&
         ms->numa_state->hmat_cache[node->node_id][node->level - 1] &&
-        (node->size >=
+        (node->size <=
          ms->numa_state->hmat_cache[node->node_id][node->level - 1]->size)) {
         error_setg(errp, "Invalid size=%" PRIu64 ", the size of level=%" PRIu8
-                   " should be less than the size(%" PRIu64 ") of "
+                   " should be larger than the size(%" PRIu64 ") of "
                    "level=%u", node->size, node->level,
                    ms->numa_state->hmat_cache[node->node_id]
                                              [node->level - 1]->size,
@@ -438,10 +438,10 @@ void parse_numa_hmat_cache(MachineState *ms, NumaHmatCacheOptions *node,
 
     if ((node->level < HMAT_LB_LEVELS - 1) &&
         ms->numa_state->hmat_cache[node->node_id][node->level + 1] &&
-        (node->size <=
+        (node->size >=
          ms->numa_state->hmat_cache[node->node_id][node->level + 1]->size)) {
         error_setg(errp, "Invalid size=%" PRIu64 ", the size of level=%" PRIu8
-                   " should be larger than the size(%" PRIu64 ") of "
+                   " should be less than the size(%" PRIu64 ") of "
                    "level=%u", node->size, node->level,
                    ms->numa_state->hmat_cache[node->node_id]
                                              [node->level + 1]->size,
when QEMU is started like:

qemu-system-x86_64 -smp 2 -machine hmat=on \
-m 2G \
-object memory-backend-ram,size=1G,id=m0 \
-object memory-backend-ram,size=1G,id=m1 \
-numa node,nodeid=0,memdev=m0 \
-numa node,nodeid=1,memdev=m1,initiator=0 \
-numa cpu,node-id=0,socket-id=0 \
-numa cpu,node-id=0,socket-id=1 \
-numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-latency,latency=5 \
-numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-bandwidth,bandwidth=200M \
-numa hmat-lb,initiator=0,target=1,hierarchy=memory,data-type=access-latency,latency=10 \
-numa hmat-lb,initiator=0,target=1,hierarchy=memory,data-type=access-bandwidth,bandwidth=100M \
-numa hmat-cache,node-id=0,size=8K,level=1,associativity=direct,policy=write-back,line=5 \
-numa hmat-cache,node-id=0,size=16K,level=2,associativity=direct,policy=write-back,line=5

it errors out with:
-numa hmat-cache,node-id=0,size=16K,level=2,associativity=direct,policy=write-back,line=5:
Invalid size=16384, the size of level=2 should be less than the size(8192) of level=1

which doesn't look right as one would expect that L1 < L2 < L3 ...
Fix it by swapping the relevant size checks.

Fixes: c412a48d4d91 (numa: Extend CLI to provide memory side cache information)
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
CC: jingqi.liu@intel.com

 hw/core/numa.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)