From patchwork Fri Aug 21 10:34:29 2020
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11728841
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org, linux-mm@kvack.org,
    linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org,
    David Hildenbrand, Andrew Morton, Michal Hocko, Dan Williams,
    "Michael S. Tsirkin",
Tsirkin" , Jason Wang , Pankaj Gupta , Baoquan He , Wei Yang Subject: [PATCH v1 3/5] virtio-mem: try to merge system ram resources Date: Fri, 21 Aug 2020 12:34:29 +0200 Message-Id: <20200821103431.13481-4-david@redhat.com> In-Reply-To: <20200821103431.13481-1-david@redhat.com> References: <20200821103431.13481-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12 X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" virtio-mem adds memory in memory block granularity, to be able to remove it in the same granularity again later, and to grow slowly on demand. This, however, results in quite a lot of resources when adding a lot of memory. Resources are effectively stored in a list-based tree. Having a lot of resources not only wastes memory, it also makes traversing that tree more expensive, and makes /proc/iomem explode in size (e.g., requiring kexec-tools to manually merge resources later when e.g., trying to create a kdump header). Before this patch, we get (/proc/iomem) when hotplugging 2G via virtio-mem on x86-64: [...] 100000000-13fffffff : System RAM 140000000-33fffffff : virtio0 140000000-147ffffff : System RAM (virtio_mem) 148000000-14fffffff : System RAM (virtio_mem) 150000000-157ffffff : System RAM (virtio_mem) 158000000-15fffffff : System RAM (virtio_mem) 160000000-167ffffff : System RAM (virtio_mem) 168000000-16fffffff : System RAM (virtio_mem) 170000000-177ffffff : System RAM (virtio_mem) 178000000-17fffffff : System RAM (virtio_mem) 180000000-187ffffff : System RAM (virtio_mem) 188000000-18fffffff : System RAM (virtio_mem) 190000000-197ffffff : System RAM (virtio_mem) 198000000-19fffffff : System RAM (virtio_mem) 1a0000000-1a7ffffff : System RAM (virtio_mem) 1a8000000-1afffffff : System RAM (virtio_mem) 1b0000000-1b7ffffff : System RAM (virtio_mem) 1b8000000-1bfffffff : System RAM (virtio_mem) 3280000000-32ffffffff : PCI Bus 0000:00 With this patch, we get (/proc/iomem): [...] fffc0000-ffffffff : Reserved 100000000-13fffffff : System RAM 140000000-33fffffff : virtio0 140000000-1bfffffff : System RAM (virtio_mem) 3280000000-32ffffffff : PCI Bus 0000:00 Of course, with more hotplugged memory, it gets worse. When unplugging memory blocks again, try_remove_memory() (via offline_and_remove_memory()) will properly split the resource up again. Cc: Andrew Morton Cc: Michal Hocko Cc: Dan Williams Cc: Michael S. 
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Dan Williams
Cc: Michael S. Tsirkin
Cc: Jason Wang
Cc: Pankaj Gupta
Cc: Baoquan He
Cc: Wei Yang
Signed-off-by: David Hildenbrand
---
 drivers/virtio/virtio_mem.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index 834b7c13ef3dc..3aae0f87073a8 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -407,6 +407,7 @@ static int virtio_mem_mb_add(struct virtio_mem *vm, unsigned long mb_id)
 {
 	const uint64_t addr = virtio_mem_mb_id_to_phys(mb_id);
 	int nid = vm->nid;
+	int rc;
 
 	if (nid == NUMA_NO_NODE)
 		nid = memory_add_physaddr_to_nid(addr);
@@ -423,8 +424,17 @@ static int virtio_mem_mb_add(struct virtio_mem *vm, unsigned long mb_id)
 	}
 
 	dev_dbg(&vm->vdev->dev, "adding memory block: %lu\n", mb_id);
-	return add_memory_driver_managed(nid, addr, memory_block_size_bytes(),
-					 vm->resource_name);
+	rc = add_memory_driver_managed(nid, addr, memory_block_size_bytes(),
+				       vm->resource_name);
+	if (!rc) {
+		/*
+		 * Try to reduce the number of system ram resources in our
+		 * resource container. The memory removal path will properly
+		 * split them up again.
+		 */
+		merge_system_ram_resources(vm->parent_resource);
+	}
+	return rc;
 }
 
 /*
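The complementary split on the removal path can be sketched in the same
simplified model. Again illustrative only: remove_range() and struct res
are invented for this sketch; in the kernel the split is performed by the
memory removal path when the unplugged range is released, as noted in the
comment added above.

#include <stdio.h>
#include <stdlib.h>

/* Same simplified resource model as in the previous sketch. */
struct res {
	unsigned long long start;
	unsigned long long end;	/* inclusive */
	const char *name;
	struct res *sibling;
};

/*
 * Remove [rm_start, rm_end] from a resource that fully contains it:
 * trim the front or back, or split the resource in two when the removed
 * range lies strictly in the middle (the counterpart of merging).
 */
static void remove_range(struct res *r, unsigned long long rm_start,
			 unsigned long long rm_end)
{
	if (rm_start == r->start) {
		r->start = rm_end + 1;		/* trim the front */
	} else if (rm_end == r->end) {
		r->end = rm_start - 1;		/* trim the back */
	} else {
		/* Split: keep the lower part, insert a new upper part. */
		struct res *upper = malloc(sizeof(*upper));

		upper->start = rm_end + 1;
		upper->end = r->end;
		upper->name = r->name;
		upper->sibling = r->sibling;
		r->end = rm_start - 1;
		r->sibling = upper;
	}
}

int main(void)
{
	struct res merged = {
		.start = 0x140000000ULL, .end = 0x1bfffffffULL,
		.name = "System RAM (virtio_mem)", .sibling = NULL,
	};

	/* Unplug one 128 MiB memory block from the middle of the range. */
	remove_range(&merged, 0x150000000ULL, 0x157ffffffULL);

	for (struct res *r = &merged; r; r = r->sibling)
		printf("%llx-%llx : %s\n", r->start, r->end, r->name);
	return 0;
}

Removing a single 128 MiB block from the middle of the merged
140000000-1bfffffff range leaves two resources, 140000000-14fffffff and
158000000-1bfffffff, which is why unplug keeps working after the merge.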