From patchwork Thu Nov 12 13:38:01 2020
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 11900261
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org, linux-mm@kvack.org,
	"Michael S. Tsirkin", David Hildenbrand, Jason Wang, Pankaj Gupta
Subject: [PATCH v2 15/29] virtio-mem: don't always trigger the workqueue when offlining memory
Date: Thu, 12 Nov 2020 14:38:01 +0100
Message-Id: <20201112133815.13332-16-david@redhat.com>
In-Reply-To: <20201112133815.13332-1-david@redhat.com>
References: <20201112133815.13332-1-david@redhat.com>
MIME-Version: 1.0

Let's trigger the workqueue from offlining code only when we're not allowed
to unplug online memory. Handle the other case (the memmap possibly freeing
up another memory block) when actually removing memory.

We now also properly handle the case of removing already-offline memory
blocks via virtio_mem_mb_remove(). When removing via virtio_mem_remove()
while unloading the driver, virtio_mem_retry() is a NOP and safe to use.

While at it, move the retry handling when offlining out of
virtio_mem_notify_offline(), to share it with Big Block Mode (BBM) soon.

This is a preparation for Big Block Mode (BBM), whereby we can see some
temporary offlining of memory blocks without actually making progress.
Imagine you have a Big Block that spans two Linux memory blocks. Assume
the first Linux memory block has no unmovable data on it. When calling
offline_and_remove_memory() on the big block, we would

1. Try to offline the first block. Works, notifiers triggered.
   virtio_mem_retry() called.
2. Try to offline the second block. Does not work.
3. Re-online the first block.
4. Exit to main loop, exit workqueue.
5. Retry immediately (due to virtio_mem_retry()), go to 1.
The result is endless retries.

Cc: "Michael S. Tsirkin"
Cc: Jason Wang
Cc: Pankaj Gupta
Signed-off-by: David Hildenbrand
---
 drivers/virtio/virtio_mem.c | 40 ++++++++++++++++++++++++++-----------
 1 file changed, 28 insertions(+), 12 deletions(-)

diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index a7beac5942e0..f86654af8b6b 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -162,6 +162,7 @@ static void virtio_mem_fake_offline_going_offline(unsigned long pfn,
 						  unsigned long nr_pages);
 static void virtio_mem_fake_offline_cancel_offline(unsigned long pfn,
 						   unsigned long nr_pages);
+static void virtio_mem_retry(struct virtio_mem *vm);
 
 /*
  * Register a virtio-mem device so it will be considered for the online_page
@@ -447,9 +448,17 @@ static int virtio_mem_mb_add(struct virtio_mem *vm, unsigned long mb_id)
 static int virtio_mem_mb_remove(struct virtio_mem *vm, unsigned long mb_id)
 {
 	const uint64_t addr = virtio_mem_mb_id_to_phys(mb_id);
+	int rc;
 
 	dev_dbg(&vm->vdev->dev, "removing memory block: %lu\n", mb_id);
-	return remove_memory(vm->nid, addr, memory_block_size_bytes());
+	rc = remove_memory(vm->nid, addr, memory_block_size_bytes());
+	if (!rc)
+		/*
+		 * We might have freed up memory we can now unplug, retry
+		 * immediately instead of waiting.
+		 */
+		virtio_mem_retry(vm);
+	return rc;
 }
 
 /*
@@ -464,11 +473,19 @@ static int virtio_mem_mb_offline_and_remove(struct virtio_mem *vm,
 					    unsigned long mb_id)
 {
 	const uint64_t addr = virtio_mem_mb_id_to_phys(mb_id);
+	int rc;
 
 	dev_dbg(&vm->vdev->dev, "offlining and removing memory block: %lu\n",
 		mb_id);
-	return offline_and_remove_memory(vm->nid, addr,
-					 memory_block_size_bytes());
+	rc = offline_and_remove_memory(vm->nid, addr,
+				       memory_block_size_bytes());
+	if (!rc)
+		/*
+		 * We might have freed up memory we can now unplug, retry
+		 * immediately instead of waiting.
+		 */
+		virtio_mem_retry(vm);
+	return rc;
 }
 
 /*
@@ -546,15 +563,6 @@ static void virtio_mem_notify_offline(struct virtio_mem *vm,
 		BUG();
 		break;
 	}
-
-	/*
-	 * Trigger the workqueue, maybe we can now unplug memory. Also,
-	 * when we offline and remove a memory block, this will re-trigger
-	 * us immediately - which is often nice because the removal of
-	 * the memory block (e.g., memmap) might have freed up memory
-	 * on other memory blocks we manage.
-	 */
-	virtio_mem_retry(vm);
 }
 
 static void virtio_mem_notify_online(struct virtio_mem *vm, unsigned long mb_id)
@@ -672,6 +680,14 @@ static int virtio_mem_memory_notifier_cb(struct notifier_block *nb,
 		break;
 	case MEM_OFFLINE:
 		virtio_mem_notify_offline(vm, mb_id);
+
+		/*
+		 * Trigger the workqueue. Now that we have some offline memory,
+		 * maybe we can handle pending unplug requests.
+		 */
+		if (!unplug_online)
+			virtio_mem_retry(vm);
+
 		vm->hotplug_active = false;
 		mutex_unlock(&vm->hotplug_mutex);
 		break;
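For reference, the endless-retry scenario from the commit message can be modeled in userspace. This is a toy sketch with entirely hypothetical names (try_offline_block(), workqueue_pass(), run_until_idle() are not part of the driver): a big block spans two Linux memory blocks, the second can never be offlined, and we compare retrying unconditionally from the MEM_OFFLINE notifier (the old behavior) against only retrying when real progress is possible.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: only the first of the two Linux memory blocks can be offlined. */
static bool try_offline_block(int id)
{
	return id == 0;
}

/*
 * One workqueue pass over the big block. If the second block cannot be
 * offlined, the first one is re-onlined (step 3 in the commit message),
 * so no actual progress is made. Returns true if a retry was requested.
 */
static bool workqueue_pass(bool retry_from_notifier)
{
	bool retry = false;

	if (try_offline_block(0)) {
		/* MEM_OFFLINE notifier fires for the first block. */
		if (retry_from_notifier)
			retry = true;	/* old behavior: always retry */
		if (!try_offline_block(1)) {
			/* re-online the first block; no progress made */
		}
	}
	return retry;
}

/* Run workqueue passes until no retry is requested, capped at max_passes. */
static int run_until_idle(bool retry_from_notifier, int max_passes)
{
	int passes = 0;

	while (passes < max_passes) {
		passes++;
		if (!workqueue_pass(retry_from_notifier))
			break;
	}
	return passes;
}
```

With the unconditional notifier retry the loop never goes idle and only the cap stops it; with the retry suppressed (as the patch does when unplug_online is set), a single pass suffices.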