From patchwork Fri May 8 12:31:44 2015
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 6365211
From: Joerg Roedel
To: Gleb Natapov, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Joerg Roedel
Subject: [PATCH] kvm: irqchip: Break up high order allocations of kvm_irq_routing_table
Date: Fri, 8 May 2015 14:31:44 +0200
Message-Id: <1431088304-11365-1-git-send-email-joro@8bytes.org>
X-Mailer: git-send-email 1.9.1

From: Joerg Roedel

The allocation size of the kvm_irq_routing_table depends on the number
of irq routing entries, because the table and all of its entries are
allocated with a single kzalloc call. When the irq routing table grows
larger, this requires high-order allocations, which fail from time to
time:

	qemu-kvm: page allocation failure: order:4, mode:0xd0

This patch fixes the issue by breaking up the allocation of the table
and its entries into individual kzalloc calls. These can all be
satisfied with order-0 allocations, which are far less likely to fail.

The downside of this change is slightly lower performance because of
the additional kzalloc calls, but kvm_set_irq_routing is called so
rarely during the lifetime of a guest that this does not matter much
in practice.
Signed-off-by: Joerg Roedel
---
 virt/kvm/irqchip.c | 40 ++++++++++++++++++++++++++++++++--------
 1 file changed, 32 insertions(+), 8 deletions(-)

diff --git a/virt/kvm/irqchip.c b/virt/kvm/irqchip.c
index 1d56a90..b56168f 100644
--- a/virt/kvm/irqchip.c
+++ b/virt/kvm/irqchip.c
@@ -33,7 +33,6 @@

 struct kvm_irq_routing_table {
 	int chip[KVM_NR_IRQCHIPS][KVM_IRQCHIP_NUM_PINS];
-	struct kvm_kernel_irq_routing_entry *rt_entries;
 	u32 nr_rt_entries;
 	/*
 	 * Array indexed by gsi. Each entry contains list of irq chips
@@ -118,11 +117,31 @@ int kvm_set_irq(struct kvm *kvm, int irq_source_id, u32 irq, int level,
 	return ret;
 }

+static void free_irq_routing_table(struct kvm_irq_routing_table *rt)
+{
+	int i;
+
+	if (!rt)
+		return;
+
+	for (i = 0; i < rt->nr_rt_entries; ++i) {
+		struct kvm_kernel_irq_routing_entry *e;
+		struct hlist_node *n;
+
+		hlist_for_each_entry_safe(e, n, &rt->map[i], link) {
+			hlist_del(&e->link);
+			kfree(e);
+		}
+	}
+
+	kfree(rt);
+}
+
 void kvm_free_irq_routing(struct kvm *kvm)
 {
 	/* Called only during vm destruction. Nobody can use the pointer
 	   at this stage */
-	kfree(kvm->irq_routing);
+	free_irq_routing_table(kvm->irq_routing);
 }

 static int setup_routing_entry(struct kvm_irq_routing_table *rt,
@@ -173,25 +192,29 @@ int kvm_set_irq_routing(struct kvm *kvm,
 	nr_rt_entries += 1;

-	new = kzalloc(sizeof(*new) + (nr_rt_entries * sizeof(struct hlist_head))
-		      + (nr * sizeof(struct kvm_kernel_irq_routing_entry)),
+	new = kzalloc(sizeof(*new) + (nr_rt_entries * sizeof(struct hlist_head)),
 		      GFP_KERNEL);

 	if (!new)
 		return -ENOMEM;

-	new->rt_entries = (void *)&new->map[nr_rt_entries];
-
 	new->nr_rt_entries = nr_rt_entries;
 	for (i = 0; i < KVM_NR_IRQCHIPS; i++)
 		for (j = 0; j < KVM_IRQCHIP_NUM_PINS; j++)
 			new->chip[i][j] = -1;

 	for (i = 0; i < nr; ++i) {
+		struct kvm_kernel_irq_routing_entry *e;
+
+		r = -ENOMEM;
+		e = kzalloc(sizeof(*e), GFP_KERNEL);
+		if (!e)
+			goto out;
+
 		r = -EINVAL;
 		if (ue->flags)
 			goto out;
-		r = setup_routing_entry(new, &new->rt_entries[i], ue);
+		r = setup_routing_entry(new, e, ue);
 		if (r)
 			goto out;
 		++ue;
 	}
@@ -209,6 +232,7 @@ int kvm_set_irq_routing(struct kvm *kvm,

 	r = 0;
 out:
-	kfree(new);
+	free_irq_routing_table(new);
+
 	return r;
 }
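
[Editor's note, not part of the patch] For context, the table that the patch
above now allocates entry-by-entry is filled from userspace through the
KVM_SET_GSI_ROUTING ioctl, which is what eventually reaches
kvm_set_irq_routing(). The sketch below is illustrative only: it assumes a
Linux host with /dev/kvm and an in-kernel irqchip; NR_GSIS and the MSI
address/data values are arbitrary placeholders, not values any particular VMM
uses. A guest with many MSI vectors pushes the entry count up, which is where
the order-4 allocation in the old kernel code came from; since a VMM typically
issues this ioctl only at device setup or hotplug, the extra per-entry kzalloc
calls in the new code are not on any hot path.

	/*
	 * Illustrative sketch only -- not part of the patch.  Assumes a
	 * Linux host with /dev/kvm; NR_GSIS and the MSI values below are
	 * placeholders.
	 */
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	#define NR_GSIS 256	/* example size; a real VMM sizes this to the guest */

	int main(void)
	{
		struct kvm_irq_routing *table;
		int sys_fd, vm_fd, i;
		size_t sz;

		sys_fd = open("/dev/kvm", O_RDWR);
		if (sys_fd < 0) {
			perror("open /dev/kvm");
			return 1;
		}

		vm_fd = ioctl(sys_fd, KVM_CREATE_VM, 0);
		if (vm_fd < 0) {
			perror("KVM_CREATE_VM");
			return 1;
		}

		/* Routing requires an in-kernel irqchip. */
		if (ioctl(vm_fd, KVM_CREATE_IRQCHIP, 0) < 0) {
			perror("KVM_CREATE_IRQCHIP");
			return 1;
		}

		/* One flat userspace buffer: header plus NR_GSIS routing entries. */
		sz = sizeof(*table) + NR_GSIS * sizeof(struct kvm_irq_routing_entry);
		table = calloc(1, sz);
		if (!table)
			return 1;

		table->nr = NR_GSIS;
		for (i = 0; i < NR_GSIS; i++) {
			struct kvm_irq_routing_entry *e = &table->entries[i];

			/* Route each GSI as an MSI entry with placeholder values. */
			e->gsi = i;
			e->type = KVM_IRQ_ROUTING_MSI;
			e->u.msi.address_lo = 0xfee00000;	/* placeholder */
			e->u.msi.address_hi = 0;
			e->u.msi.data = i;			/* placeholder */
		}

		/* This single ioctl ends up in kvm_set_irq_routing() in the kernel. */
		if (ioctl(vm_fd, KVM_SET_GSI_ROUTING, table) < 0)
			perror("KVM_SET_GSI_ROUTING");

		free(table);
		return 0;
	}

With a table like this, the old kernel code had to satisfy one allocation
covering the table header, the per-GSI hlist heads and all routing entries in
a single contiguous chunk, while the patched code asks for at most one entry
(or the header) at a time, each of which fits in an order-0 page.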