From patchwork Thu Mar 28 01:07:15 2019
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10873801
From: Nadav Amit
To: Greg Kroah-Hartman, Arnd Bergmann
CC: "Michael S. Tsirkin", Jason Wang, "VMware, Inc.", Julien Freche, Nadav Amit
Subject: [PATCH v2 1/4] mm/balloon_compaction: list interfaces
Date: Thu, 28 Mar 2019 01:07:15 +0000
Message-ID: <20190328010718.2248-2-namit@vmware.com>
In-Reply-To: <20190328010718.2248-1-namit@vmware.com>
References: <20190328010718.2248-1-namit@vmware.com>

Introduce interfaces for enqueueing and dequeueing a list of balloon
pages. These interfaces reduce the overhead of saving and restoring IRQs
by batching the operations. In addition, they do not panic if the list
of pages is empty.

Cc: "Michael S. Tsirkin"
Cc: Jason Wang
Cc: linux-mm@kvack.org
Cc: virtualization@lists.linux-foundation.org
Reviewed-by: Xavier Deguillard
Signed-off-by: Nadav Amit
---
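Note (an illustrative sketch, not part of the patch): a driver that
previously called balloon_page_enqueue() once per page could batch an
inflation with the new interfaces roughly as follows, assuming a
driver-provided struct balloon_dev_info *b_dev_info and a hypothetical
requested count nr:

	LIST_HEAD(pages);
	size_t enqueued;
	int i;

	/* Allocate the batch first; no lock is held at this point. */
	for (i = 0; i < nr; i++) {
		struct page *page = balloon_page_alloc();

		if (!page)
			break;
		list_add(&page->lru, &pages);
	}

	/* One pages_lock IRQ save/restore for the whole batch. */
	enqueued = balloon_page_list_enqueue(b_dev_info, &pages);

Deflation is symmetric: balloon_page_list_dequeue(b_dev_info, &pages, n)
moves up to n pages back onto a caller-provided list under a single lock
acquisition, skipping pages that are transiently isolated for compaction.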
Tsirkin" Cc: Jason Wang Cc: linux-mm@kvack.org Cc: virtualization@lists.linux-foundation.org Reviewed-by: Xavier Deguillard Signed-off-by: Nadav Amit --- include/linux/balloon_compaction.h | 4 + mm/balloon_compaction.c | 145 +++++++++++++++++++++-------- 2 files changed, 111 insertions(+), 38 deletions(-) diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h index f111c780ef1d..1da79edadb69 100644 --- a/include/linux/balloon_compaction.h +++ b/include/linux/balloon_compaction.h @@ -64,6 +64,10 @@ extern struct page *balloon_page_alloc(void); extern void balloon_page_enqueue(struct balloon_dev_info *b_dev_info, struct page *page); extern struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info); +extern size_t balloon_page_list_enqueue(struct balloon_dev_info *b_dev_info, + struct list_head *pages); +extern size_t balloon_page_list_dequeue(struct balloon_dev_info *b_dev_info, + struct list_head *pages, int n_req_pages); static inline void balloon_devinfo_init(struct balloon_dev_info *balloon) { diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c index ef858d547e2d..88d5d9a01072 100644 --- a/mm/balloon_compaction.c +++ b/mm/balloon_compaction.c @@ -10,6 +10,106 @@ #include #include +static int balloon_page_enqueue_one(struct balloon_dev_info *b_dev_info, + struct page *page) +{ + /* + * Block others from accessing the 'page' when we get around to + * establishing additional references. We should be the only one + * holding a reference to the 'page' at this point. + */ + if (!trylock_page(page)) { + WARN_ONCE(1, "balloon inflation failed to enqueue page\n"); + return -EFAULT; + } + list_del(&page->lru); + balloon_page_insert(b_dev_info, page); + unlock_page(page); + __count_vm_event(BALLOON_INFLATE); + return 0; +} + +/** + * balloon_page_list_enqueue() - inserts a list of pages into the balloon page + * list. + * @b_dev_info: balloon device descriptor where we will insert a new page to + * @pages: pages to enqueue - allocated using balloon_page_alloc. + * + * Driver must call it to properly enqueue a balloon pages before definitively + * removing it from the guest system. + * + * Return: number of pages that were enqueued. + */ +size_t balloon_page_list_enqueue(struct balloon_dev_info *b_dev_info, + struct list_head *pages) +{ + struct page *page, *tmp; + unsigned long flags; + size_t n_pages = 0; + + spin_lock_irqsave(&b_dev_info->pages_lock, flags); + list_for_each_entry_safe(page, tmp, pages, lru) { + balloon_page_enqueue_one(b_dev_info, page); + n_pages++; + } + spin_unlock_irqrestore(&b_dev_info->pages_lock, flags); + return n_pages; +} +EXPORT_SYMBOL_GPL(balloon_page_list_enqueue); + +/** + * balloon_page_list_dequeue() - removes pages from balloon's page list and + * returns a list of the pages. + * @b_dev_info: balloon device decriptor where we will grab a page from. + * @pages: pointer to the list of pages that would be returned to the caller. + * @n_req_pages: number of requested pages. + * + * Driver must call it to properly de-allocate a previous enlisted balloon pages + * before definetively releasing it back to the guest system. This function + * tries to remove @n_req_pages from the ballooned pages and return it to the + * caller in the @pages list. + * + * Note that this function may fail to dequeue some pages temporarily empty due + * to compaction isolated pages. + * + * Return: number of pages that were added to the @pages list. 
+ */
+size_t balloon_page_list_dequeue(struct balloon_dev_info *b_dev_info,
+				 struct list_head *pages, int n_req_pages)
+{
+	struct page *page, *tmp;
+	unsigned long flags;
+	size_t n_pages = 0;
+
+	spin_lock_irqsave(&b_dev_info->pages_lock, flags);
+	list_for_each_entry_safe(page, tmp, &b_dev_info->pages, lru) {
+		/*
+		 * Block others from accessing the 'page' while we get around
+		 * establishing additional references and preparing the 'page'
+		 * to be released by the balloon driver.
+		 */
+		if (!trylock_page(page))
+			continue;
+
+		if (IS_ENABLED(CONFIG_BALLOON_COMPACTION) &&
+		    PageIsolated(page)) {
+			/* raced with isolation */
+			unlock_page(page);
+			continue;
+		}
+		balloon_page_delete(page);
+		__count_vm_event(BALLOON_DEFLATE);
+		unlock_page(page);
+		list_add(&page->lru, pages);
+		if (++n_pages >= n_req_pages)
+			break;
+	}
+	spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
+
+	return n_pages;
+}
+EXPORT_SYMBOL_GPL(balloon_page_list_dequeue);
+
 /*
  * balloon_page_alloc - allocates a new page for insertion into the balloon
  *			page list.
@@ -43,17 +143,9 @@ void balloon_page_enqueue(struct balloon_dev_info *b_dev_info,
 {
 	unsigned long flags;
 
-	/*
-	 * Block others from accessing the 'page' when we get around to
-	 * establishing additional references. We should be the only one
-	 * holding a reference to the 'page' at this point.
-	 */
-	BUG_ON(!trylock_page(page));
 	spin_lock_irqsave(&b_dev_info->pages_lock, flags);
-	balloon_page_insert(b_dev_info, page);
-	__count_vm_event(BALLOON_INFLATE);
+	balloon_page_enqueue_one(b_dev_info, page);
 	spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
-	unlock_page(page);
 }
 EXPORT_SYMBOL_GPL(balloon_page_enqueue);
 
@@ -70,36 +162,13 @@ EXPORT_SYMBOL_GPL(balloon_page_enqueue);
  */
 struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info)
 {
-	struct page *page, *tmp;
 	unsigned long flags;
-	bool dequeued_page;
+	LIST_HEAD(pages);
+	int n_pages;
 
-	dequeued_page = false;
-	spin_lock_irqsave(&b_dev_info->pages_lock, flags);
-	list_for_each_entry_safe(page, tmp, &b_dev_info->pages, lru) {
-		/*
-		 * Block others from accessing the 'page' while we get around
-		 * establishing additional references and preparing the 'page'
-		 * to be released by the balloon driver.
-		 */
-		if (trylock_page(page)) {
-#ifdef CONFIG_BALLOON_COMPACTION
-			if (PageIsolated(page)) {
-				/* raced with isolation */
-				unlock_page(page);
-				continue;
-			}
-#endif
-			balloon_page_delete(page);
-			__count_vm_event(BALLOON_DEFLATE);
-			unlock_page(page);
-			dequeued_page = true;
-			break;
-		}
-	}
-	spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
+	n_pages = balloon_page_list_dequeue(b_dev_info, &pages, 1);
 
-	if (!dequeued_page) {
+	if (n_pages != 1) {
 		/*
 		 * If we are unable to dequeue a balloon page because the page
 		 * list is empty and there is no isolated pages, then something
@@ -112,9 +181,9 @@ struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info)
 		    !b_dev_info->isolated_pages))
 			BUG();
 		spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
-		page = NULL;
+		return NULL;
 	}
-	return page;
+	return list_first_entry(&pages, struct page, lru);
 }
 EXPORT_SYMBOL_GPL(balloon_page_dequeue);

From patchwork Thu Mar 28 01:07:16 2019
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10873805
From: Nadav Amit
To: Greg Kroah-Hartman, Arnd Bergmann
CC: "Michael S. Tsirkin", Jason Wang, "VMware, Inc.", Julien Freche, Nadav Amit
Subject: [PATCH v2 2/4] vmw_balloon: compaction support
Date: Thu, 28 Mar 2019 01:07:16 +0000
Message-ID: <20190328010718.2248-3-namit@vmware.com>
In-Reply-To: <20190328010718.2248-1-namit@vmware.com>
References: <20190328010718.2248-1-namit@vmware.com>

Add compaction support to the VMware balloon. Unlike the virtio
balloon, we also support huge pages, which do not go through
compaction, so we keep these pages in vmballoon and handle this list
separately. We use the same lock to protect both lists, as this lock is
not expected to be contended. Doing so also eliminates the need for the
page_size lists.

We update the accounting as needed so that inflation, deflation and
migration are reflected in vmstat. Since the VMware balloon now
provides statistics for inflation, deflation and migration in vmstat,
select MEMORY_BALLOON in Kconfig.

Reviewed-by: Xavier Deguillard
Signed-off-by: Nadav Amit
---
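For orientation (a condensed sketch of what the diff below wires up, not
additional code): the balloon_compaction framework requires the driver to
provide a mapping and a migration callback, while this driver additionally
keeps 2MB pages on a private list:

	/* A pseudo-filesystem inode gives ballooned 4KB pages a mapping
	 * whose a_ops (balloon_aops) expose isolate/migrate/putback. */
	vmballoon_mnt = kern_mount(&vmballoon_fs);
	b->b_dev_info.inode = alloc_anon_inode(vmballoon_mnt->mnt_sb);
	b->b_dev_info.inode->i_mapping->a_ops = &balloon_aops;

	/* Migration swaps an isolated ballooned page for a new one via a
	 * hypervisor deflate followed by an inflate. */
	b->b_dev_info.migratepage = vmballoon_migratepage;

Huge (2MB) pages never enter b_dev_info.pages and therefore are never
isolated or migrated; they live on balloon.huge_pages under the same
b_dev_info.pages_lock.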
 drivers/misc/Kconfig       |   1 +
 drivers/misc/vmw_balloon.c | 301 ++++++++++++++++++++++++++++++++-----
 2 files changed, 264 insertions(+), 38 deletions(-)

diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index 42ab8ec92a04..427cf10579b4 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -420,6 +420,7 @@ config SPEAR13XX_PCIE_GADGET
 config VMWARE_BALLOON
 	tristate "VMware Balloon Driver"
 	depends on VMWARE_VMCI && X86 && HYPERVISOR_GUEST
+	select MEMORY_BALLOON
 	help
 	  This is VMware physical memory management driver which acts
 	  like a "balloon" that can be inflated to reclaim physical pages
diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
index ad807d5a3141..2136f6ad97d3 100644
--- a/drivers/misc/vmw_balloon.c
+++ b/drivers/misc/vmw_balloon.c
@@ -28,6 +28,8 @@
 #include <linux/rwsem.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
+#include <linux/mount.h>
+#include <linux/balloon_compaction.h>
 #include <linux/vmw_vmci_defs.h>
 #include <linux/vmw_vmci_api.h>
 #include <asm/hypervisor.h>
@@ -38,25 +40,11 @@
 MODULE_ALIAS("dmi:*:svnVMware*:*");
 MODULE_ALIAS("vmware_vmmemctl");
 MODULE_LICENSE("GPL");
 
-/*
- * Use __GFP_HIGHMEM to allow pages from HIGHMEM zone. We don't allow wait
- * (__GFP_RECLAIM) for huge page allocations. Use __GFP_NOWARN, to suppress page
- * allocation failure warnings. Disallow access to emergency low-memory pools.
- */
-#define VMW_HUGE_PAGE_ALLOC_FLAGS	(__GFP_HIGHMEM|__GFP_NOWARN|	\
-					 __GFP_NOMEMALLOC)
-
-/*
- * Use __GFP_HIGHMEM to allow pages from HIGHMEM zone. We allow lightweight
- * reclamation (__GFP_NORETRY). Use __GFP_NOWARN, to suppress page allocation
- * failure warnings. Disallow access to emergency low-memory pools.
- */
-#define VMW_PAGE_ALLOC_FLAGS		(__GFP_HIGHMEM|__GFP_NOWARN|	\
-					 __GFP_NOMEMALLOC|__GFP_NORETRY)
-
-/* Maximum number of refused pages we accumulate during inflation cycle */
 #define VMW_BALLOON_MAX_REFUSED		16
 
+/* Magic number for the balloon mount-point */
+#define BALLOON_VMW_MAGIC		0x0ba11007
+
 /*
  * Hypervisor communication port definitions.
  */
@@ -247,11 +235,6 @@ struct vmballoon_ctl {
 	enum vmballoon_op op;
 };
 
-struct vmballoon_page_size {
-	/* list of reserved physical pages */
-	struct list_head pages;
-};
-
 /**
  * struct vmballoon_batch_entry - a batch entry for lock or unlock.
 *
@@ -266,8 +249,6 @@ struct vmballoon_batch_entry {
 } __packed;
 
 struct vmballoon {
-	struct vmballoon_page_size page_sizes[VMW_BALLOON_NUM_PAGE_SIZES];
-
 	/**
 	 * @max_page_size: maximum supported page size for ballooning.
 	 *
@@ -348,8 +329,20 @@ struct vmballoon {
 	struct dentry *dbg_entry;
 #endif
 
+	/**
+	 * @b_dev_info: balloon device information descriptor.
+	 */
+	struct balloon_dev_info b_dev_info;
+
 	struct delayed_work dwork;
 
+	/**
+	 * @huge_pages - list of the inflated 2MB pages.
+	 *
+	 * Protected by @b_dev_info.pages_lock .
+	 */
+	struct list_head huge_pages;
+
 	/**
 	 * @vmci_doorbell.
 	 *
@@ -643,10 +636,10 @@ static int vmballoon_alloc_page_list(struct vmballoon *b,
 
 	for (i = 0; i < req_n_pages; i++) {
 		if (ctl->page_size == VMW_BALLOON_2M_PAGE)
-			page = alloc_pages(VMW_HUGE_PAGE_ALLOC_FLAGS,
-					   VMW_BALLOON_2M_ORDER);
+			page = alloc_pages(__GFP_HIGHMEM|__GFP_NOWARN|
+					__GFP_NOMEMALLOC, VMW_BALLOON_2M_ORDER);
 		else
-			page = alloc_page(VMW_PAGE_ALLOC_FLAGS);
+			page = balloon_page_alloc();
 
 		/* Update statistics */
 		vmballoon_stats_page_inc(b, VMW_BALLOON_PAGE_STAT_ALLOC,
@@ -961,9 +954,22 @@ static void vmballoon_enqueue_page_list(struct vmballoon *b,
 					unsigned int *n_pages,
 					enum vmballoon_page_size_type page_size)
 {
-	struct vmballoon_page_size *page_size_info = &b->page_sizes[page_size];
+	unsigned long flags;
+
+	if (page_size == VMW_BALLOON_4K_PAGE) {
+		balloon_page_list_enqueue(&b->b_dev_info, pages);
+	} else {
+		/*
+		 * Keep the huge pages in a local list which is not available
+		 * for the balloon compaction mechanism.
+		 */
+		spin_lock_irqsave(&b->b_dev_info.pages_lock, flags);
+		list_splice_init(pages, &b->huge_pages);
+		__count_vm_events(BALLOON_INFLATE, *n_pages *
+				  vmballoon_page_in_frames(VMW_BALLOON_2M_PAGE));
+		spin_unlock_irqrestore(&b->b_dev_info.pages_lock, flags);
+	}
 
-	list_splice_init(pages, &page_size_info->pages);
 	*n_pages = 0;
 }
 
@@ -986,15 +992,28 @@ static void vmballoon_dequeue_page_list(struct vmballoon *b,
 					enum vmballoon_page_size_type page_size,
 					unsigned int n_req_pages)
 {
-	struct vmballoon_page_size *page_size_info = &b->page_sizes[page_size];
 	struct page *page, *tmp;
 	unsigned int i = 0;
+	unsigned long flags;
 
-	list_for_each_entry_safe(page, tmp, &page_size_info->pages, lru) {
+	/* In the case of 4k pages, use the compaction infrastructure */
+	if (page_size == VMW_BALLOON_4K_PAGE) {
+		*n_pages = balloon_page_list_dequeue(&b->b_dev_info, pages,
+						     n_req_pages);
+		return;
+	}
+
+	/* 2MB pages */
+	spin_lock_irqsave(&b->b_dev_info.pages_lock, flags);
+	list_for_each_entry_safe(page, tmp, &b->huge_pages, lru) {
 		list_move(&page->lru, pages);
 		if (++i == n_req_pages)
 			break;
 	}
+
+	__count_vm_events(BALLOON_DEFLATE,
+			  i * vmballoon_page_in_frames(VMW_BALLOON_2M_PAGE));
+	spin_unlock_irqrestore(&b->b_dev_info.pages_lock, flags);
 	*n_pages = i;
 }
 
@@ -1552,9 +1571,204 @@ static inline void vmballoon_debugfs_exit(struct vmballoon *b)
 
 #endif	/* CONFIG_DEBUG_FS */
 
+
+#ifdef CONFIG_BALLOON_COMPACTION
+
+static struct dentry *vmballoon_mount(struct file_system_type *fs_type,
+				      int flags, const char *dev_name,
+				      void *data)
+{
+	static const struct dentry_operations ops = {
+		.d_dname = simple_dname,
+	};
+
+	return mount_pseudo(fs_type, "balloon-vmware:", NULL, &ops,
+			    BALLOON_VMW_MAGIC);
+}
+
+static struct file_system_type vmballoon_fs = {
+	.name		= "balloon-vmware",
+	.mount		= vmballoon_mount,
+	.kill_sb	= kill_anon_super,
+};
+
+static struct vfsmount *vmballoon_mnt;
+
+/**
+ * vmballoon_migratepage() - migrates a balloon page.
+ * @b_dev_info: balloon device information descriptor.
+ * @newpage: the page to which @page should be migrated.
+ * @page: a ballooned page that should be migrated.
+ * @mode: migration mode, ignored.
+ *
+ * This function is really open-coded, but that is according to the interface
+ * that balloon_compaction provides.
+ *
+ * Return: zero on success, -EAGAIN when migration cannot be performed
+ *	   momentarily, and -EBUSY if migration failed and should be retried
+ *	   with that specific page.
+ */
+static int vmballoon_migratepage(struct balloon_dev_info *b_dev_info,
+				 struct page *newpage, struct page *page,
+				 enum migrate_mode mode)
+{
+	unsigned long status, flags;
+	struct vmballoon *b;
+	int ret;
+
+	b = container_of(b_dev_info, struct vmballoon, b_dev_info);
+
+	/*
+	 * If the semaphore is taken, there is an ongoing configuration change
+	 * (i.e., balloon reset), so try again.
+	 */
+	if (!down_read_trylock(&b->conf_sem))
+		return -EAGAIN;
+
+	spin_lock(&b->comm_lock);
+	/*
+	 * We must start by deflating and not inflating, as otherwise the
+	 * hypervisor may tell us that it has enough memory and the new page is
+	 * not needed. Since the old page is isolated, we cannot use the list
+	 * interface to unlock it, as the LRU field is used for isolation.
+	 * Instead, we use the native interface directly.
+	 */
+	vmballoon_add_page(b, 0, page);
+	status = vmballoon_lock_op(b, 1, VMW_BALLOON_4K_PAGE,
+				   VMW_BALLOON_DEFLATE);
+
+	if (status == VMW_BALLOON_SUCCESS)
+		status = vmballoon_status_page(b, 0, &page);
+
+	/*
+	 * If a failure happened, let the migration mechanism know that it
+	 * should not retry.
+	 */
+	if (status != VMW_BALLOON_SUCCESS) {
+		spin_unlock(&b->comm_lock);
+		ret = -EBUSY;
+		goto out_unlock;
+	}
+
+	/*
+	 * The page is isolated, so it is safe to delete it without holding
+	 * @pages_lock . We keep holding @comm_lock since we will need it in a
+	 * second.
+	 */
+	balloon_page_delete(page);
+
+	put_page(page);
+
+	/* Inflate */
+	vmballoon_add_page(b, 0, newpage);
+	status = vmballoon_lock_op(b, 1, VMW_BALLOON_4K_PAGE,
+				   VMW_BALLOON_INFLATE);
+
+	if (status == VMW_BALLOON_SUCCESS)
+		status = vmballoon_status_page(b, 0, &newpage);
+
+	spin_unlock(&b->comm_lock);
+
+	if (status != VMW_BALLOON_SUCCESS) {
+		/*
+		 * A failure happened. While we can deflate the page we just
+		 * inflated, this deflation can also encounter an error.
+		 * Instead we will decrease the size of the balloon to reflect
+		 * the change and report failure.
+		 */
+		atomic64_dec(&b->size);
+		ret = -EBUSY;
+	} else {
+		/*
+		 * Success. Take a reference for the page, and we will add it
+		 * to the list after acquiring the lock.
+		 */
+		get_page(newpage);
+		ret = MIGRATEPAGE_SUCCESS;
+	}
+
+	/* Update the balloon list under the @pages_lock */
+	spin_lock_irqsave(&b->b_dev_info.pages_lock, flags);
+
+	/*
+	 * On inflation success, we already took a reference for the @newpage.
+	 * If we succeeded, just insert it to the list and update the
+	 * statistics under the lock.
+	 */
+	if (ret == MIGRATEPAGE_SUCCESS) {
+		balloon_page_insert(&b->b_dev_info, newpage);
+		__count_vm_event(BALLOON_MIGRATE);
+	}
+
+	/*
+	 * We deflated successfully, so regardless of the inflation success,
+	 * we need to reduce the number of isolated_pages.
+	 */
+	b->b_dev_info.isolated_pages--;
+	spin_unlock_irqrestore(&b->b_dev_info.pages_lock, flags);
+
+out_unlock:
+	up_read(&b->conf_sem);
+	return ret;
+}
+
+/**
+ * vmballoon_compaction_deinit() - removes compaction related data.
+ *
+ * @b: pointer to the balloon.
+ */
+static void vmballoon_compaction_deinit(struct vmballoon *b)
+{
+	if (!IS_ERR(b->b_dev_info.inode))
+		iput(b->b_dev_info.inode);
+
+	b->b_dev_info.inode = NULL;
+	kern_unmount(vmballoon_mnt);
+	vmballoon_mnt = NULL;
+}
+
+/**
+ * vmballoon_compaction_init() - initializes compaction for the balloon.
+ *
+ * @b: pointer to the balloon.
+ *
+ * If a failure occurs during initialization, this function does not perform
+ * cleanup. The caller must call vmballoon_compaction_deinit() in this case.
+ *
+ * Return: zero on success or error code on failure.
+ */
+static __init int vmballoon_compaction_init(struct vmballoon *b)
+{
+	vmballoon_mnt = kern_mount(&vmballoon_fs);
+	if (IS_ERR(vmballoon_mnt))
+		return PTR_ERR(vmballoon_mnt);
+
+	b->b_dev_info.migratepage = vmballoon_migratepage;
+	b->b_dev_info.inode = alloc_anon_inode(vmballoon_mnt->mnt_sb);
+
+	if (IS_ERR(b->b_dev_info.inode))
+		return PTR_ERR(b->b_dev_info.inode);
+
+	b->b_dev_info.inode->i_mapping->a_ops = &balloon_aops;
+	return 0;
+}
+
+#else /* CONFIG_BALLOON_COMPACTION */
+
+static void vmballoon_compaction_deinit(struct vmballoon *b)
+{
+}
+
+static int vmballoon_compaction_init(struct vmballoon *b)
+{
+	return 0;
+}
+
+#endif /* CONFIG_BALLOON_COMPACTION */
+
 static int __init vmballoon_init(void)
 {
-	enum vmballoon_page_size_type page_size;
 	int error;
 
 	/*
@@ -1564,17 +1778,22 @@ static int __init vmballoon_init(void)
 	if (x86_hyper_type != X86_HYPER_VMWARE)
 		return -ENODEV;
 
-	for (page_size = VMW_BALLOON_4K_PAGE;
-	     page_size <= VMW_BALLOON_LAST_SIZE; page_size++)
-		INIT_LIST_HEAD(&balloon.page_sizes[page_size].pages);
-
 	INIT_DELAYED_WORK(&balloon.dwork, vmballoon_work);
 
 	error = vmballoon_debugfs_init(&balloon);
 	if (error)
-		return error;
+		goto fail;
 
+	/*
+	 * Initialization of compaction must be done after the call to
+	 * balloon_devinfo_init() .
+	 */
+	balloon_devinfo_init(&balloon.b_dev_info);
+	error = vmballoon_compaction_init(&balloon);
+	if (error)
+		goto fail;
+
+	INIT_LIST_HEAD(&balloon.huge_pages);
 	spin_lock_init(&balloon.comm_lock);
 	init_rwsem(&balloon.conf_sem);
 	balloon.vmci_doorbell = VMCI_INVALID_HANDLE;
@@ -1585,6 +1804,9 @@ static int __init vmballoon_init(void)
 	queue_delayed_work(system_freezable_wq, &balloon.dwork, 0);
 
 	return 0;
+fail:
+	vmballoon_compaction_deinit(&balloon);
+	return error;
 }
 
 /*
@@ -1609,5 +1831,8 @@ static void __exit vmballoon_exit(void)
 	 */
 	vmballoon_send_start(&balloon, 0);
 	vmballoon_pop(&balloon);
+
+	/* Only once we popped the balloon can compaction be deinitialized. */
+	vmballoon_compaction_deinit(&balloon);
 }
 module_exit(vmballoon_exit);

From patchwork Thu Mar 28 01:07:17 2019
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10873803
From: Nadav Amit
To: Greg Kroah-Hartman, Arnd Bergmann
CC: "Michael S. Tsirkin", Jason Wang, "VMware, Inc.", Julien Freche, Nadav Amit
Subject: [PATCH v2 3/4] vmw_balloon: add memory shrinker
Date: Thu, 28 Mar 2019 01:07:17 +0000
Message-ID: <20190328010718.2248-4-namit@vmware.com>
In-Reply-To: <20190328010718.2248-1-namit@vmware.com>
References: <20190328010718.2248-1-namit@vmware.com>

Add a shrinker to the VMware balloon to prevent out-of-memory events.
We reuse the deflate logic for this matter. Deadlocks should not
happen, as no memory allocation is performed while the locks of the
communication (batch/page) and page-list are taken. In the unlikely
event that the configuration semaphore is taken for write, we bail out
and fail gracefully (causing processes to be killed).

Once the shrinker is called, inflation is postponed for a few seconds.
The timeout is updated without any lock, but this should not cause any
races, as it is written and read atomically.

This feature is disabled by default, since it might cause performance
degradation.

Reviewed-by: Xavier Deguillard
Signed-off-by: Nadav Amit
---
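Background for the shrinker registration below (a sketch of the generic
shrinker contract as used here, not new driver code): a struct shrinker
supplies two callbacks; count_objects() reports how many objects could
plausibly be freed, and scan_objects() frees up to sc->nr_to_scan of them,
returning the number actually freed:

	b->shrinker.count_objects = vmballoon_shrinker_count;	/* balloon size */
	b->shrinker.scan_objects = vmballoon_shrinker_scan;	/* deflate */
	b->shrinker.seeks = DEFAULT_SEEKS;
	r = register_shrinker(&b->shrinker);

The balloon thus reports its current size as reclaimable and deflates under
memory pressure, which is why vmballoon_shrinker_scan() maps almost directly
onto vmballoon_deflate().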
 drivers/misc/vmw_balloon.c | 133 ++++++++++++++++++++++++++++++++++++-
 1 file changed, 131 insertions(+), 2 deletions(-)

diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
index 2136f6ad97d3..59d3c0202dcc 100644
--- a/drivers/misc/vmw_balloon.c
+++ b/drivers/misc/vmw_balloon.c
@@ -40,6 +40,15 @@ MODULE_ALIAS("dmi:*:svnVMware*:*");
 MODULE_ALIAS("vmware_vmmemctl");
 MODULE_LICENSE("GPL");
 
+bool __read_mostly vmwballoon_shrinker_enable;
+module_param(vmwballoon_shrinker_enable, bool, 0444);
+MODULE_PARM_DESC(vmwballoon_shrinker_enable,
+	"Enable non-cooperative out-of-memory protection. Disabled by default as it may degrade performance.");
+
+/* Delay in seconds after shrink before inflation. */
+#define VMBALLOON_SHRINK_DELAY		(5)
+
+/* Maximum number of refused pages we accumulate during inflation cycle */
 #define VMW_BALLOON_MAX_REFUSED		16
 
 /* Magic number for the balloon mount-point */
@@ -217,12 +226,13 @@ enum vmballoon_stat_general {
 	VMW_BALLOON_STAT_TIMER,
 	VMW_BALLOON_STAT_DOORBELL,
 	VMW_BALLOON_STAT_RESET,
-	VMW_BALLOON_STAT_LAST = VMW_BALLOON_STAT_RESET
+	VMW_BALLOON_STAT_SHRINK,
+	VMW_BALLOON_STAT_SHRINK_FREE,
+	VMW_BALLOON_STAT_LAST = VMW_BALLOON_STAT_SHRINK_FREE
 };
 
 #define VMW_BALLOON_STAT_NUM		(VMW_BALLOON_STAT_LAST + 1)
 
-
 static DEFINE_STATIC_KEY_TRUE(vmw_balloon_batching);
 static DEFINE_STATIC_KEY_FALSE(balloon_stat_enabled);
 
@@ -321,6 +331,15 @@ struct vmballoon {
 	 */
 	struct page *page;
 
+	/**
+	 * @shrink_timeout: timeout until the next inflation.
+	 *
+	 * After a shrink event, indicates the time in jiffies after which
+	 * inflation is allowed again. Can be written concurrently with reads,
+	 * so must use READ_ONCE/WRITE_ONCE when accessing.
+	 */
+	unsigned long shrink_timeout;
+
 	/* statistics */
 	struct vmballoon_stats *stats;
 
@@ -361,6 +380,20 @@ struct vmballoon {
 	 * Lock ordering: @conf_sem -> @comm_lock .
 	 */
 	spinlock_t comm_lock;
+
+	/**
+	 * @shrinker: shrinker interface that is used to avoid over-inflation.
+	 */
+	struct shrinker shrinker;
+
+	/**
+	 * @shrinker_registered: whether the shrinker was registered.
+	 *
+	 * The shrinker interface does not gracefully handle the removal of a
+	 * shrinker that was not registered before. This indication allows to
+	 * simplify the unregistration process.
+	 */
+	bool shrinker_registered;
 };
 
 static struct vmballoon balloon;
@@ -935,6 +968,10 @@ static int64_t vmballoon_change(struct vmballoon *b)
 	    size - target < vmballoon_page_in_frames(VMW_BALLOON_2M_PAGE))
 		return 0;
 
+	/* If an out-of-memory recently occurred, inflation is disallowed. */
+	if (target > size && time_before(jiffies, READ_ONCE(b->shrink_timeout)))
+		return 0;
+
 	return target - size;
 }
 
@@ -1430,6 +1467,90 @@ static void vmballoon_work(struct work_struct *work)
 
 }
 
+/**
+ * vmballoon_shrinker_scan() - deflate the balloon due to memory pressure.
+ * @shrinker: pointer to the balloon shrinker.
+ * @sc: page reclaim information.
+ *
+ * Returns: number of pages that were freed during deflation.
+ */
+static unsigned long vmballoon_shrinker_scan(struct shrinker *shrinker,
+					     struct shrink_control *sc)
+{
+	struct vmballoon *b = &balloon;
+	unsigned long deflated_frames;
+
+	pr_debug("%s - size: %llu", __func__, atomic64_read(&b->size));
+
+	vmballoon_stats_gen_inc(b, VMW_BALLOON_STAT_SHRINK);
+
+	/*
+	 * If the lock is also contended for read, we cannot easily reclaim
+	 * and we bail out.
+	 */
+	if (!down_read_trylock(&b->conf_sem))
+		return 0;
+
+	deflated_frames = vmballoon_deflate(b, sc->nr_to_scan, true);
+
+	vmballoon_stats_gen_add(b, VMW_BALLOON_STAT_SHRINK_FREE,
+				deflated_frames);
+
+	/*
+	 * Delay future inflation for some time to mitigate the situations in
+	 * which the balloon continuously grows and shrinks. Use WRITE_ONCE()
+	 * since the access is asynchronous.
+	 */
+	WRITE_ONCE(b->shrink_timeout, jiffies + HZ * VMBALLOON_SHRINK_DELAY);
+
+	up_read(&b->conf_sem);
+
+	return deflated_frames;
+}
+
+/**
+ * vmballoon_shrinker_count() - return the number of ballooned pages.
+ * @shrinker: pointer to the balloon shrinker.
+ * @sc: page reclaim information.
+ *
+ * Returns: number of 4k pages that are allocated for the balloon and can
+ *	    therefore be reclaimed under pressure.
+ */
+static unsigned long vmballoon_shrinker_count(struct shrinker *shrinker,
+					      struct shrink_control *sc)
+{
+	struct vmballoon *b = &balloon;
+
+	return atomic64_read(&b->size);
+}
+
+static void vmballoon_unregister_shrinker(struct vmballoon *b)
+{
+	if (b->shrinker_registered)
+		unregister_shrinker(&b->shrinker);
+	b->shrinker_registered = false;
+}
+
+static int vmballoon_register_shrinker(struct vmballoon *b)
+{
+	int r;
+
+	/* Do nothing if the shrinker is not enabled */
+	if (!vmwballoon_shrinker_enable)
+		return 0;
+
+	b->shrinker.scan_objects = vmballoon_shrinker_scan;
+	b->shrinker.count_objects = vmballoon_shrinker_count;
+	b->shrinker.seeks = DEFAULT_SEEKS;
+
+	r = register_shrinker(&b->shrinker);
+
+	if (r == 0)
+		b->shrinker_registered = true;
+
+	return r;
+}
+
 /*
  * DEBUGFS Interface
  */
@@ -1447,6 +1568,8 @@ static const char * const vmballoon_stat_names[] = {
 	[VMW_BALLOON_STAT_TIMER]		= "timer",
 	[VMW_BALLOON_STAT_DOORBELL]		= "doorbell",
 	[VMW_BALLOON_STAT_RESET]		= "reset",
+	[VMW_BALLOON_STAT_SHRINK]		= "shrink",
+	[VMW_BALLOON_STAT_SHRINK_FREE]		= "shrinkFree"
 };
 
 static int vmballoon_enable_stats(struct vmballoon *b)
@@ -1780,6 +1903,10 @@ static int __init vmballoon_init(void)
 
 	INIT_DELAYED_WORK(&balloon.dwork, vmballoon_work);
 
+	error = vmballoon_register_shrinker(&balloon);
+	if (error)
+		goto fail;
+
 	error = vmballoon_debugfs_init(&balloon);
 	if (error)
 		goto fail;
@@ -1805,6 +1932,7 @@ static int __init vmballoon_init(void)
 
 	return 0;
 fail:
+	vmballoon_unregister_shrinker(&balloon);
 	vmballoon_compaction_deinit(&balloon);
 	return error;
 }
@@ -1819,6 +1947,7 @@ late_initcall(vmballoon_init);
 
 static void __exit vmballoon_exit(void)
 {
+	vmballoon_unregister_shrinker(&balloon);
 	vmballoon_vmci_cleanup(&balloon);
 	cancel_delayed_work_sync(&balloon.dwork);

From patchwork Thu Mar 28 01:07:18 2019
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10873799
From: Nadav Amit
To: Greg Kroah-Hartman, Arnd Bergmann
CC: "Michael S. Tsirkin", Jason Wang, "VMware, Inc.", Julien Freche, Nadav Amit
Subject: [PATCH v2 4/4] vmw_balloon: split refused pages
Date: Thu, 28 Mar 2019 01:07:18 +0000
Message-ID: <20190328010718.2248-5-namit@vmware.com>
In-Reply-To: <20190328010718.2248-1-namit@vmware.com>
References: <20190328010718.2248-1-namit@vmware.com>

The hypervisor might refuse to inflate pages. While the balloon driver
handles this scenario correctly, a refusal to inflate a 2MB page might
cause the same page to be allocated again later, just for its inflation
to be refused again. This wastes energy and time.

To avoid this situation, split the 2MB page into 4KB pages, and then
try to inflate each one individually. Most of the 4KB pages out of the
2MB page should be inflated successfully, so the balloon is likely to
prevent the scenario of repeated refused inflation.

Reviewed-by: Xavier Deguillard
Signed-off-by: Nadav Amit
---
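The mechanism relies on split_page(), which turns one non-compound order-N
allocation into 2^N independently refcounted order-0 pages. A toy
illustration (a hypothetical snippet, with some_list standing in for the
driver's prealloc_pages list; VMW_BALLOON_2M_ORDER is 9 on x86, so one 2MB
page becomes 512 4KB pages):

	struct page *page = alloc_pages(__GFP_HIGHMEM|__GFP_NOWARN|
					__GFP_NOMEMALLOC, VMW_BALLOON_2M_ORDER);
	unsigned int i;

	if (page) {
		split_page(page, VMW_BALLOON_2M_ORDER);
		for (i = 0; i < (1 << VMW_BALLOON_2M_ORDER); i++)
			list_add(&page[i].lru, &some_list);
	}

Each of the resulting 4KB pages can then be inflated (or freed) on its own,
so a single stubborn 4KB page no longer causes the whole 2MB allocation to be
refused repeatedly.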
 drivers/misc/vmw_balloon.c | 63 +++++++++++++++++++++++++++++++-------
 1 file changed, 52 insertions(+), 11 deletions(-)

diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
index 59d3c0202dcc..65ce8b41cd66 100644
--- a/drivers/misc/vmw_balloon.c
+++ b/drivers/misc/vmw_balloon.c
@@ -239,6 +239,7 @@ static DEFINE_STATIC_KEY_FALSE(balloon_stat_enabled);
 struct vmballoon_ctl {
 	struct list_head pages;
 	struct list_head refused_pages;
+	struct list_head prealloc_pages;
 	unsigned int n_refused_pages;
 	unsigned int n_pages;
 	enum vmballoon_page_size_type page_size;
@@ -668,15 +669,25 @@ static int vmballoon_alloc_page_list(struct vmballoon *b,
 	unsigned int i;
 
 	for (i = 0; i < req_n_pages; i++) {
-		if (ctl->page_size == VMW_BALLOON_2M_PAGE)
-			page = alloc_pages(__GFP_HIGHMEM|__GFP_NOWARN|
+		/*
+		 * First check if we happen to have pages that were allocated
+		 * before. This happens when a 2MB page is rejected during
+		 * inflation by the hypervisor, and then split into 4KB pages.
+		 */
+		if (!list_empty(&ctl->prealloc_pages)) {
+			page = list_first_entry(&ctl->prealloc_pages,
+						struct page, lru);
+			list_del(&page->lru);
+		} else {
+			if (ctl->page_size == VMW_BALLOON_2M_PAGE)
+				page = alloc_pages(__GFP_HIGHMEM|__GFP_NOWARN|
 					__GFP_NOMEMALLOC, VMW_BALLOON_2M_ORDER);
-		else
-			page = balloon_page_alloc();
+			else
+				page = balloon_page_alloc();
 
-		/* Update statistics */
-		vmballoon_stats_page_inc(b, VMW_BALLOON_PAGE_STAT_ALLOC,
-					 ctl->page_size);
+			vmballoon_stats_page_inc(b,
+				VMW_BALLOON_PAGE_STAT_ALLOC, ctl->page_size);
+		}
 
 		if (page) {
 			vmballoon_mark_page_offline(page, ctl->page_size);
@@ -922,7 +933,8 @@ static void vmballoon_release_page_list(struct list_head *page_list,
 		__free_pages(page, vmballoon_page_order(page_size));
 	}
 
-	*n_pages = 0;
+	if (n_pages)
+		*n_pages = 0;
 }
 
 
@@ -1054,6 +1066,32 @@ static void vmballoon_dequeue_page_list(struct vmballoon *b,
 	*n_pages = i;
 }
 
+/**
+ * vmballoon_split_refused_pages() - Split the 2MB refused pages to 4k.
+ *
+ * If inflation of 2MB pages was denied by the hypervisor, it is likely to be
+ * due to one or a few 4KB pages. These 2MB pages may keep being allocated and
+ * then being refused. To prevent this case, this function splits the refused
+ * pages into 4KB pages and adds them into the @prealloc_pages list.
+ *
+ * @ctl: pointer for the %struct vmballoon_ctl, which defines the operation.
+ */
+static void vmballoon_split_refused_pages(struct vmballoon_ctl *ctl)
+{
+	struct page *page, *tmp;
+	unsigned int i, order;
+
+	order = vmballoon_page_order(ctl->page_size);
+
+	list_for_each_entry_safe(page, tmp, &ctl->refused_pages, lru) {
+		list_del(&page->lru);
+		split_page(page, order);
+		for (i = 0; i < (1 << order); i++)
+			list_add(&page[i].lru, &ctl->prealloc_pages);
+	}
+	ctl->n_refused_pages = 0;
+}
+
 /**
  * vmballoon_inflate() - Inflate the balloon towards its target size.
  *
@@ -1065,6 +1103,7 @@ static void vmballoon_inflate(struct vmballoon *b)
 	struct vmballoon_ctl ctl = {
 		.pages = LIST_HEAD_INIT(ctl.pages),
 		.refused_pages = LIST_HEAD_INIT(ctl.refused_pages),
+		.prealloc_pages = LIST_HEAD_INIT(ctl.prealloc_pages),
 		.page_size = b->max_page_size,
 		.op = VMW_BALLOON_INFLATE
 	};
@@ -1112,10 +1151,10 @@ static void vmballoon_inflate(struct vmballoon *b)
 				break;
 
 			/*
-			 * Ignore errors from locking as we now switch to 4k
-			 * pages and we might get different errors.
+			 * Split the refused pages to 4k. This will also empty
+			 * the refused pages list.
 			 */
-			vmballoon_release_refused_pages(b, &ctl);
+			vmballoon_split_refused_pages(&ctl);
 			ctl.page_size--;
 		}
 
@@ -1129,6 +1168,8 @@ static void vmballoon_inflate(struct vmballoon *b)
 	 */
 	if (ctl.n_refused_pages != 0)
 		vmballoon_release_refused_pages(b, &ctl);
+
+	vmballoon_release_page_list(&ctl.prealloc_pages, NULL, ctl.page_size);
 }
 
 /**