From patchwork Fri Dec 18 13:53:24 2020
X-Patchwork-Submitter: Greg Kurz
X-Patchwork-Id: 11982047
Subject: [PATCH] spapr: Fix buffer overflow in spapr_numa_associativity_init()
From: Greg Kurz
To: qemu-devel@nongnu.org
Cc: Daniel Henrique Barboza, qemu-ppc@nongnu.org, qemu-stable@nongnu.org, David Gibson
Date: Fri, 18 Dec 2020 14:53:24 +0100
Message-ID: <160829960428.734871.12634150161215429514.stgit@bahia.lan>
Running a guest with 128 NUMA nodes crashes QEMU:

    ../../util/error.c:59: error_setv: Assertion `*errp == NULL' failed.

The crash happens when setting the FWNMI migration blocker:

    if (spapr_get_cap(spapr, SPAPR_CAP_FWNMI) == SPAPR_CAP_ON) {
        /* Create the error string for live migration blocker */
        error_setg(&spapr->fwnmi_migration_blocker,
            "A machine check is being handled during migration. The handler"
            "may run and log hardware error on the destination");
    }

Inspection reveals that spapr->fwnmi_migration_blocker isn't NULL:

    (gdb) p spapr->fwnmi_migration_blocker
    $1 = (Error *) 0x8000000004000000

Since this is the only place where spapr->fwnmi_migration_blocker is
set, something must have written to it behind our back. Further
analysis points to spapr_numa_associativity_init(), especially the part
that initializes the associativity arrays for NVLink GPUs:

    max_nodes_with_gpus = nb_numa_nodes + NVGPU_MAX_NUM;

i.e. max_nodes_with_gpus = 128 + 6, but the array isn't sized to
accommodate the 6 extra nodes:

    #define MAX_NODES 128

    struct SpaprMachineState {
        .
        .
        .
        uint32_t numa_assoc_array[MAX_NODES][NUMA_ASSOC_SIZE];

        Error *fwnmi_migration_blocker;
    };

and the following loops happily overwrite spapr->fwnmi_migration_blocker,
and probably more:

    for (i = nb_numa_nodes; i < max_nodes_with_gpus; i++) {
        spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);

        for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
            uint32_t gpu_assoc = smc->pre_5_1_assoc_refpoints ?
                                 SPAPR_GPU_NUMA_ID : cpu_to_be32(i);
            spapr->numa_assoc_array[i][j] = gpu_assoc;
        }

        spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
    }

Fix the size of the array. This requires "hw/ppc/spapr.h" to see
NVGPU_MAX_NUM, but including "hw/pci-host/spapr.h" there introduces a
circular dependency that breaks the build, so move the definition of
NVGPU_MAX_NUM to "hw/ppc/spapr.h" instead.
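For illustration only (not part of the patch), here is a minimal,
self-contained sketch of the corruption pattern: a fixed-size array
embedded in a struct is written past its end, and the writes land on the
member that follows it. All names (fake_machine_state, FAKE_MAX_NODES,
FAKE_NVGPU_MAX_NUM) are made up for the example; the out-of-bounds
access is undefined behaviour, which is exactly the point.

    /* Illustration only: an out-of-bounds write on an embedded array
     * clobbering the struct member that follows it, mirroring how
     * numa_assoc_array overflowed into fwnmi_migration_blocker. */
    #include <stdint.h>
    #include <stdio.h>

    #define FAKE_MAX_NODES     4   /* stands in for MAX_NODES (128) */
    #define FAKE_NVGPU_MAX_NUM 2   /* stands in for NVGPU_MAX_NUM (6) */

    struct fake_machine_state {
        uint32_t numa_assoc_array[FAKE_MAX_NODES][2];
        void *fwnmi_migration_blocker;   /* laid out right after the array */
    };

    int main(void)
    {
        struct fake_machine_state s = { .fwnmi_migration_blocker = NULL };
        int i;

        /* The loop bound exceeds the array size, as in the buggy code. */
        for (i = 0; i < FAKE_MAX_NODES + FAKE_NVGPU_MAX_NUM; i++) {
            s.numa_assoc_array[i][0] = 0xdeadbeef;  /* OOB once i >= FAKE_MAX_NODES */
            s.numa_assoc_array[i][1] = 0xdeadbeef;
        }

        /* The pointer is no longer NULL: the loop overwrote it. */
        printf("fwnmi_migration_blocker = %p\n", s.fwnmi_migration_blocker);
        return 0;
    }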
Reported-by: Min Deng
BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=1908693
Fixes: dd7e1d7ae431 ("spapr_numa: move NVLink2 associativity handling to spapr_numa.c")
Cc: danielhb413@gmail.com
Signed-off-by: Greg Kurz
---
 include/hw/pci-host/spapr.h |    2 --
 include/hw/ppc/spapr.h      |    5 ++++-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/hw/pci-host/spapr.h b/include/hw/pci-host/spapr.h
index 4f58f0223b56..bd014823a933 100644
--- a/include/hw/pci-host/spapr.h
+++ b/include/hw/pci-host/spapr.h
@@ -115,8 +115,6 @@ struct SpaprPhbState {
 #define SPAPR_PCI_NV2RAM64_WIN_BASE SPAPR_PCI_LIMIT
 #define SPAPR_PCI_NV2RAM64_WIN_SIZE (2 * TiB) /* For up to 6 GPUs 256GB each */
 
-/* Max number of these GPUsper a physical box */
-#define NVGPU_MAX_NUM 6
 
 /* Max number of NVLinks per GPU in any physical box */
 #define NVGPU_MAX_LINKS 3
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 06a5b4259f20..1cc19575f548 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -112,6 +112,9 @@ typedef enum {
 #define NUMA_ASSOC_SIZE (MAX_DISTANCE_REF_POINTS + 1)
 #define VCPU_ASSOC_SIZE (NUMA_ASSOC_SIZE + 1)
 
+/* Max number of these GPUsper a physical box */
+#define NVGPU_MAX_NUM 6
+
 typedef struct SpaprCapabilities SpaprCapabilities;
 struct SpaprCapabilities {
     uint8_t caps[SPAPR_CAP_NUM];
@@ -240,7 +243,7 @@ struct SpaprMachineState {
     unsigned gpu_numa_id;
     SpaprTpmProxy *tpm_proxy;
 
-    uint32_t numa_assoc_array[MAX_NODES][NUMA_ASSOC_SIZE];
+    uint32_t numa_assoc_array[MAX_NODES + NVGPU_MAX_NUM][NUMA_ASSOC_SIZE];
 
     Error *fwnmi_migration_blocker;
 };
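Not part of this patch, but a possible follow-up hardening would be to
tie the array bound to the values it must cover with a compile-time
check. A sketch, assuming QEMU's existing QEMU_BUILD_BUG_ON() (from
"qemu/compiler.h") and ARRAY_SIZE() (from "qemu/osdep.h") macros, placed
for example at the top of spapr_numa_associativity_init():

    /* Sketch only: fail the build if numa_assoc_array can no longer hold
     * MAX_NODES regular nodes plus NVGPU_MAX_NUM GPU pseudo-nodes. */
    QEMU_BUILD_BUG_ON(ARRAY_SIZE(((SpaprMachineState *)0)->numa_assoc_array)
                      < MAX_NODES + NVGPU_MAX_NUM);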