From patchwork Fri Sep 21 11:17:20 2012
X-Patchwork-Submitter: Vasilis Liaskovitis <vasilis.liaskovitis@profitbricks.com>
X-Patchwork-Id: 1491271
From: Vasilis Liaskovitis <vasilis.liaskovitis@profitbricks.com>
To: qemu-devel@nongnu.org, kvm@vger.kernel.org, seabios@seabios.org
Cc: avi@redhat.com, anthony@codemonkey.ws, kevin@koconnor.net,
 wency@cn.fujitsu.com, kraxel@redhat.com, eblake@redhat.com,
 blauwirbel@gmail.com, gleb@redhat.com, imammedo@redhat.com,
 Vasilis Liaskovitis <vasilis.liaskovitis@profitbricks.com>
Subject: [RFC PATCH v3 04/19][SeaBIOS] acpi: generate hotplug memory devices
Date: Fri, 21 Sep 2012 13:17:20 +0200
Message-Id: <1348226255-4226-5-git-send-email-vasilis.liaskovitis@profitbricks.com>
In-Reply-To: <1348226255-4226-1-git-send-email-vasilis.liaskovitis@profitbricks.com>
References: <1348226255-4226-1-git-send-email-vasilis.liaskovitis@profitbricks.com>
X-Mailing-List: kvm@vger.kernel.org

The memory device generation is guided by QEMU paravirt info. SeaBIOS first
uses this info to set up SRAT entries for the hotpluggable memory slots.
Afterwards, build_memssdt uses the created SRAT entries to generate the
corresponding memory device objects: one memory device (and one SRAT entry)
is generated for each hotpluggable QEMU memslot. Currently no SSDT memory
device is created for initial system memory.

Only up to 255 DIMMs are supported for now: the PackageOp used for the MEON
array can describe at most 255 elements; VarPackageOp would be needed to
support more devices.

v1->v2:
- SeaBIOS reads mems_sts from QEMU to build the e820_map.
- SSDT size and some offsets are calculated with extraction macros.

v2->v3:
- Minor name changes.

Signed-off-by: Vasilis Liaskovitis <vasilis.liaskovitis@profitbricks.com>
---
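Note for reviewers: build_memssdt() below treats the paravirt range at
MEM_BASE (0xaf80) as a bitmap of memslot enable bits, one bit per
hotpluggable memslot, eight slots per byte. A minimal standalone sketch of
that decoding (the helper name is illustrative and is not used by the patch):

#include "types.h"  // u8, u32
#include "ioport.h" // inb

#define MEM_BASE 0xaf80

/* Illustrative helper: returns 1 if hotplug memslot 'slot' is enabled.
 * Slot i maps to bit (i % 8) of the byte at MEM_BASE + i/8, the same
 * layout build_memssdt() walks while emitting the MEON package and the
 * e820 entries. */
static int memslot_enabled(u32 slot)
{
    u8 status = inb(MEM_BASE + slot / 8);
    return (status >> (slot % 8)) & 1;
}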
 src/acpi.c |  158 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 152 insertions(+), 6 deletions(-)

diff --git a/src/acpi.c b/src/acpi.c
index 6d239fa..1223b52 100644
--- a/src/acpi.c
+++ b/src/acpi.c
@@ -13,6 +13,7 @@
 #include "pci_regs.h" // PCI_INTERRUPT_LINE
 #include "ioport.h" // inl
 #include "paravirt.h" // qemu_cfg_irq0_override
+#include "memmap.h"
 
 /****************************************************/
 /* ACPI tables init */
 /****************************************************/
@@ -416,11 +417,26 @@ encodeLen(u8 *ssdt_ptr, int length, int bytes)
 #define PCIHP_AML (ssdp_pcihp_aml + *ssdt_pcihp_start)
 #define PCI_SLOTS 32
 
+/* 0x5B 0x82 DeviceOp PkgLength NameString DimmID */
+#define MEM_BASE 0xaf80
+#define MEM_AML (ssdm_mem_aml + *ssdt_mem_start)
+#define MEM_SIZEOF (*ssdt_mem_end - *ssdt_mem_start)
+#define MEM_OFFSET_HEX (*ssdt_mem_name - *ssdt_mem_start + 2)
+#define MEM_OFFSET_ID (*ssdt_mem_id - *ssdt_mem_start)
+#define MEM_OFFSET_PXM 31
+#define MEM_OFFSET_START 55
+#define MEM_OFFSET_END 63
+#define MEM_OFFSET_SIZE 79
+
+u64 nb_hp_memslots = 0;
+struct srat_memory_affinity *mem;
+
 #define SSDT_SIGNATURE 0x54445353 // SSDT
 #define SSDT_HEADER_LENGTH 36
 
 #include "ssdt-susp.hex"
 #include "ssdt-pcihp.hex"
+#include "ssdt-mem.hex"
 
 #define PCI_RMV_BASE 0xae0c
@@ -472,6 +488,111 @@ static void patch_pcihp(int slot, u8 *ssdt_ptr, u32 eject)
     }
 }
 
+static void build_memdev(u8 *ssdt_ptr, int i, u64 mem_base, u64 mem_len, u8 node)
+{
+    memcpy(ssdt_ptr, MEM_AML, MEM_SIZEOF);
+    ssdt_ptr[MEM_OFFSET_HEX] = getHex(i >> 4);
+    ssdt_ptr[MEM_OFFSET_HEX+1] = getHex(i);
+    ssdt_ptr[MEM_OFFSET_ID] = i;
+    ssdt_ptr[MEM_OFFSET_PXM] = node;
+    *(u64*)(ssdt_ptr + MEM_OFFSET_START) = mem_base;
+    *(u64*)(ssdt_ptr + MEM_OFFSET_END) = mem_base + mem_len;
+    *(u64*)(ssdt_ptr + MEM_OFFSET_SIZE) = mem_len;
+}
+
+static void*
+build_memssdt(void)
+{
+    u64 mem_base;
+    u64 mem_len;
+    u8 node;
+    int i;
+    struct srat_memory_affinity *entry = mem;
+    u64 nb_memdevs = nb_hp_memslots;
+    u8 memslot_status, enabled;
+
+    int length = ((1+3+4)
+                  + (nb_memdevs * MEM_SIZEOF)
+                  + (1+2+5+(12*nb_memdevs))
+                  + (6+2+1+(1*nb_memdevs)));
+    u8 *ssdt = malloc_high(sizeof(struct acpi_table_header) + length);
+    if (! ssdt) {
+        warn_noalloc();
+        return NULL;
+    }
+    u8 *ssdt_ptr = ssdt + sizeof(struct acpi_table_header);
+
+    // build Scope(_SB_) header
+    *(ssdt_ptr++) = 0x10; // ScopeOp
+    ssdt_ptr = encodeLen(ssdt_ptr, length-1, 3);
+    *(ssdt_ptr++) = '_';
+    *(ssdt_ptr++) = 'S';
+    *(ssdt_ptr++) = 'B';
+    *(ssdt_ptr++) = '_';
+
+    for (i = 0; i < nb_memdevs; i++) {
+        mem_base = (((u64)(entry->base_addr_high) << 32 )| entry->base_addr_low);
+        mem_len = (((u64)(entry->length_high) << 32 )| entry->length_low);
+        node = entry->proximity[0];
+        build_memdev(ssdt_ptr, i, mem_base, mem_len, node);
+        ssdt_ptr += MEM_SIZEOF;
+        entry++;
+    }
+
+    // build "Method(MTFY, 2) {If (LEqual(Arg0, 0x00)) {Notify(CM00, Arg1)} ...}"
+    *(ssdt_ptr++) = 0x14; // MethodOp
+    ssdt_ptr = encodeLen(ssdt_ptr, 2+5+(12*nb_memdevs), 2);
+    *(ssdt_ptr++) = 'M';
+    *(ssdt_ptr++) = 'T';
+    *(ssdt_ptr++) = 'F';
+    *(ssdt_ptr++) = 'Y';
+    *(ssdt_ptr++) = 0x02;
+    for (i=0; i<nb_memdevs; i++) {
+        *(ssdt_ptr++) = 0xA0; // IfOp
+        ssdt_ptr = encodeLen(ssdt_ptr, 11, 1);
+        *(ssdt_ptr++) = 0x93; // LEqualOp
+        *(ssdt_ptr++) = 0x68; // Arg0Op
+        *(ssdt_ptr++) = 0x0a; // BytePrefix
+        *(ssdt_ptr++) = i;
+        *(ssdt_ptr++) = 0x86; // NotifyOp
+        *(ssdt_ptr++) = 'C';
+        *(ssdt_ptr++) = 'M';
+        *(ssdt_ptr++) = getHex(i >> 4);
+        *(ssdt_ptr++) = getHex(i);
+        *(ssdt_ptr++) = 0x69; // Arg1Op
+    }
+
+    // build "Name(MEON, Package() { One, One, ..., Zero, Zero, ... })"
+    *(ssdt_ptr++) = 0x08; // NameOp
+    *(ssdt_ptr++) = 'M';
+    *(ssdt_ptr++) = 'E';
+    *(ssdt_ptr++) = 'O';
+    *(ssdt_ptr++) = 'N';
+    *(ssdt_ptr++) = 0x12; // PackageOp
+    ssdt_ptr = encodeLen(ssdt_ptr, 2+1+(1*nb_memdevs), 2);
+    *(ssdt_ptr++) = nb_memdevs;
+
+    entry = mem;
+    memslot_status = 0;
+
+    for (i = 0; i < nb_memdevs; i++) {
+        enabled = 0;
+        if (i % 8 == 0)
+            memslot_status = inb(MEM_BASE + i/8);
+        enabled = memslot_status & 1;
+        mem_base = (((u64)(entry->base_addr_high) << 32 )| entry->base_addr_low);
+        mem_len = (((u64)(entry->length_high) << 32 )| entry->length_low);
+        *(ssdt_ptr++) = enabled ? 0x01 : 0x00;
+        if (enabled)
+            add_e820(mem_base, mem_len, E820_RAM);
+        memslot_status = memslot_status >> 1;
+        entry++;
+    }
+    build_header((void*)ssdt, SSDT_SIGNATURE, ssdt_ptr - ssdt, 1);
+
+    return ssdt;
+}
+
 static void*
 build_ssdt(void)
 {
@@ -644,9 +765,6 @@ build_srat(void)
 {
     int nb_numa_nodes = qemu_cfg_get_numa_nodes();
 
-    if (nb_numa_nodes == 0)
-        return NULL;
-
     u64 *numadata = malloc_tmphigh(sizeof(u64) * (MaxCountCPUs + nb_numa_nodes));
     if (!numadata) {
         warn_noalloc();
@@ -655,10 +773,11 @@ build_srat(void)
 
     qemu_cfg_get_numa_data(numadata, MaxCountCPUs + nb_numa_nodes);
+    qemu_cfg_get_numa_data(&nb_hp_memslots, 1);
 
     struct system_resource_affinity_table *srat;
     int srat_size = sizeof(*srat) +
         sizeof(struct srat_processor_affinity) * MaxCountCPUs +
-        sizeof(struct srat_memory_affinity) * (nb_numa_nodes + 2);
+        sizeof(struct srat_memory_affinity) * (nb_numa_nodes + nb_hp_memslots + 2);
 
     srat = malloc_high(srat_size);
     if (!srat) {
@@ -693,7 +812,7 @@
      * from 640k-1M and possibly another one from 3.5G-4G.
      */
     struct srat_memory_affinity *numamem = (void*)core;
-    int slots = 0;
+    int slots = 0, node;
     u64 mem_len, mem_base, next_base = 0;
 
     acpi_build_srat_memory(numamem, 0, 640*1024, 0, 1);
@@ -720,10 +839,36 @@
             next_base += (1ULL << 32) - RamSize;
         }
         acpi_build_srat_memory(numamem, mem_base, mem_len, i-1, 1);
+        numamem++;
         slots++;
+    }
 
-    for (; slots < nb_numa_nodes + 2; slots++) {
+    mem = (void*)numamem;
+
+    if (nb_hp_memslots) {
+        u64 *hpmemdata = malloc_tmphigh(sizeof(u64) * (3 * nb_hp_memslots));
+        if (!hpmemdata) {
+            warn_noalloc();
+            free(hpmemdata);
+            free(numadata);
+            return NULL;
+        }
+
+        qemu_cfg_get_numa_data(hpmemdata, 3 * nb_hp_memslots);
+
+        for (i = 1; i < nb_hp_memslots + 1; ++i) {
+            mem_base = *hpmemdata++;
+            mem_len = *hpmemdata++;
+            node = *hpmemdata++;
+            acpi_build_srat_memory(numamem, mem_base, mem_len, node, 1);
+            numamem++;
+            slots++;
+        }
+        free(hpmemdata);
+    }
+
+    for (; slots < nb_numa_nodes + nb_hp_memslots + 2; slots++) {
         acpi_build_srat_memory(numamem, 0, 0, 0, 0);
         numamem++;
     }
@@ -774,6 +919,7 @@ acpi_bios_init(void)
     ACPI_INIT_TABLE(build_madt());
     ACPI_INIT_TABLE(build_hpet());
     ACPI_INIT_TABLE(build_srat());
+    ACPI_INIT_TABLE(build_memssdt());
 
     u16 i, external_tables = qemu_cfg_acpi_additional_tables();