From patchwork Wed Aug 28 09:07:43 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13780935
From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Cc: David Hildenbrand, Peter Maydell, Stefan Hajnoczi, Peter Xu,
 qemu-stable@nongnu.org, Paolo Bonzini, Philippe Mathieu-Daudé
Subject: [PATCH v2] softmmu/physmem: fix memory leak in dirty_memory_extend()
Date: Wed, 28 Aug 2024 11:07:43 +0200
Message-ID: <20240828090743.128647-1-david@redhat.com>

As reported by Peter, we might be leaking memory when removing the
highest RAMBlock (in the weird ram_addr_t space), and adding a new one.

We will fail to realize that we already allocated bitmaps for more dirty
memory blocks, and effectively discard the pointers to them.

Fix it by getting rid of last_ram_page() and by remembering the number
of dirty memory blocks that have been allocated already.

While at it, let's use "unsigned int" for the number of blocks, which
should be sufficient until we reach ~32 exabytes.

Looks like this leak was introduced as we switched from using a single
bitmap_zero_extend() to allocating multiple bitmaps:
bitmap_zero_extend() relies on g_renew() which should have taken care
of this.

Resolves: https://lkml.kernel.org/r/CAFEAcA-k7a+VObGAfCFNygQNfCKL=AfX6A4kScq=VSSK0peqPg@mail.gmail.com
Reported-by: Peter Maydell
Fixes: 5b82b703b69a ("memory: RCU ram_list.dirty_memory[] for safe RAM hotplug")
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Peter Xu
Tested-by: Peter Maydell
Cc: qemu-stable@nongnu.org
Cc: Stefan Hajnoczi
Cc: Paolo Bonzini
Cc: Peter Xu
Cc: "Philippe Mathieu-Daudé"
Signed-off-by: David Hildenbrand
---

v1 -> v2:
 * Move the counter to RAMList
 * Use "unsigned int" instead of "ram_addr_t" as type for the number of
   blocks
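
For readers less familiar with this code, here is a minimal stand-alone
sketch of the pattern the patch moves to (illustrative only, not QEMU
code; blocks[], num_alloced and extend_tracked() are made-up names): the
extend routine starts from a remembered allocation count rather than one
re-derived from the current highest RAMBlock, so pointers to
already-allocated bitmaps are never overwritten and leaked.
dirty_memory_extend() below applies the same idea via
ram_list.num_dirty_blocks.

    #include <stdlib.h>

    #define MAX_BLOCKS  16
    #define BLOCK_BYTES 128

    static unsigned long *blocks[MAX_BLOCKS]; /* stand-in for a dirty bitmap array */
    static unsigned int num_alloced;          /* remembered allocation count */

    static void extend_tracked(unsigned int new_num_blocks)
    {
        if (new_num_blocks > MAX_BLOCKS) {
            new_num_blocks = MAX_BLOCKS; /* keep the sketch in bounds */
        }
        /*
         * Start from the remembered count. Re-deriving the "old" count
         * from the current highest block can undercount after a
         * remove + add and would overwrite (leak) live pointers.
         */
        for (unsigned int i = num_alloced; i < new_num_blocks; i++) {
            blocks[i] = calloc(BLOCK_BYTES, 1);
        }
        if (new_num_blocks > num_alloced) {
            num_alloced = new_num_blocks;
        }
    }

    int main(void)
    {
        extend_tracked(4); /* first block added: 4 bitmap blocks allocated */
        extend_tracked(2); /* smaller "new size": nothing to do, nothing lost */
        extend_tracked(6); /* grows again: only blocks 4 and 5 get allocated */
        return 0;
    }
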
---
 include/exec/ramlist.h |  1 +
 system/physmem.c       | 35 +++++++++--------------------------
 2 files changed, 10 insertions(+), 26 deletions(-)

diff --git a/include/exec/ramlist.h b/include/exec/ramlist.h
index 2ad2a81acc..d9cfe530be 100644
--- a/include/exec/ramlist.h
+++ b/include/exec/ramlist.h
@@ -50,6 +50,7 @@ typedef struct RAMList {
     /* RCU-enabled, writes protected by the ramlist lock. */
     QLIST_HEAD(, RAMBlock) blocks;
     DirtyMemoryBlocks *dirty_memory[DIRTY_MEMORY_NUM];
+    unsigned int num_dirty_blocks;
     uint32_t version;
     QLIST_HEAD(, RAMBlockNotifier) ramblock_notifiers;
 } RAMList;
diff --git a/system/physmem.c b/system/physmem.c
index 94600a33ec..5e7f066762 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -1534,18 +1534,6 @@ static ram_addr_t find_ram_offset(ram_addr_t size)
     return offset;
 }
 
-static unsigned long last_ram_page(void)
-{
-    RAMBlock *block;
-    ram_addr_t last = 0;
-
-    RCU_READ_LOCK_GUARD();
-    RAMBLOCK_FOREACH(block) {
-        last = MAX(last, block->offset + block->max_length);
-    }
-    return last >> TARGET_PAGE_BITS;
-}
-
 static void qemu_ram_setup_dump(void *addr, ram_addr_t size)
 {
     int ret;
@@ -1799,13 +1787,11 @@ void qemu_ram_msync(RAMBlock *block, ram_addr_t start, ram_addr_t length)
 }
 
 /* Called with ram_list.mutex held */
-static void dirty_memory_extend(ram_addr_t old_ram_size,
-                                ram_addr_t new_ram_size)
+static void dirty_memory_extend(ram_addr_t new_ram_size)
 {
-    ram_addr_t old_num_blocks = DIV_ROUND_UP(old_ram_size,
-                                             DIRTY_MEMORY_BLOCK_SIZE);
-    ram_addr_t new_num_blocks = DIV_ROUND_UP(new_ram_size,
-                                             DIRTY_MEMORY_BLOCK_SIZE);
+    unsigned int old_num_blocks = ram_list.num_dirty_blocks;
+    unsigned int new_num_blocks = DIV_ROUND_UP(new_ram_size,
+                                               DIRTY_MEMORY_BLOCK_SIZE);
     int i;
 
     /* Only need to extend if block count increased */
@@ -1837,6 +1823,8 @@ static void dirty_memory_extend(ram_addr_t old_ram_size,
             g_free_rcu(old_blocks, rcu);
         }
     }
+
+    ram_list.num_dirty_blocks = new_num_blocks;
 }
 
 static void ram_block_add(RAMBlock *new_block, Error **errp)
@@ -1846,11 +1834,9 @@ static void ram_block_add(RAMBlock *new_block, Error **errp)
     RAMBlock *block;
     RAMBlock *last_block = NULL;
     bool free_on_error = false;
-    ram_addr_t old_ram_size, new_ram_size;
+    ram_addr_t ram_size;
     Error *err = NULL;
 
-    old_ram_size = last_ram_page();
-
     qemu_mutex_lock_ramlist();
     new_block->offset = find_ram_offset(new_block->max_length);
 
@@ -1901,11 +1887,8 @@ static void ram_block_add(RAMBlock *new_block, Error **errp)
         }
     }
 
-    new_ram_size = MAX(old_ram_size,
-                       (new_block->offset + new_block->max_length) >> TARGET_PAGE_BITS);
-    if (new_ram_size > old_ram_size) {
-        dirty_memory_extend(old_ram_size, new_ram_size);
-    }
+    ram_size = (new_block->offset + new_block->max_length) >> TARGET_PAGE_BITS;
+    dirty_memory_extend(ram_size);
     /* Keep the list sorted from biggest to smallest block.  Unlike QTAILQ,
      * QLIST (which has an RCU-friendly variant) does not have insertion at
      * tail, so save the last element in last_block.