From patchwork Tue Feb 18 15:46:17 2020
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 11388851
From: Jan Beulich
To: "xen-devel@lists.xenproject.org"
Cc: Anthony Perard, Ian Jackson, Wei Liu
Date: Tue, 18 Feb 2020 16:46:17 +0100
Subject: [Xen-devel] [PATCH 1/5] libxl/PCI: honor multiple per-device reserved memory regions

While in "host" strategy all regions get processed, only the first
entry of the per-device ones has been consumed so far.

Signed-off-by: Jan Beulich
Acked-by: Wei Liu

--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -471,8 +471,7 @@ int libxl__domain_device_construct_rdm(l
 
     /* Query RDM entries per-device */
     for (i = 0; i < d_config->num_pcidevs; i++) {
-        unsigned int nr_entries;
-        bool new = true;
+        unsigned int n, nr_entries;
 
         seg = d_config->pcidevs[i].domain;
         bus = d_config->pcidevs[i].bus;
@@ -489,36 +488,41 @@ int libxl__domain_device_construct_rdm(l
 
         assert(xrdm);
 
-        /*
-         * Need to check whether this entry is already saved in the array.
-         * This could come from two cases:
-         *
-         * - user may configure to get all RDMs in this platform, which
-         *   is already queried before this point
-         * - or two assigned devices may share one RDM entry
-         *
-         * Different policies may be configured on the same RDM due to
-         * above two cases. But we don't allow to assign such a group
-         * devies right now so it doesn't come true in our case.
-         */
-        for (j = 0; j < d_config->num_rdms; j++) {
-            if (d_config->rdms[j].start == pfn_to_paddr(xrdm[0].start_pfn))
-            {
-                /*
-                 * So the per-device policy always override the global
-                 * policy in this case.
-                 */
-                d_config->rdms[j].policy = d_config->pcidevs[i].rdm_policy;
-                new = false;
-                break;
+        for (n = 0; n < nr_entries; ++n) {
+            bool new = true;
+
+            /*
+             * Need to check whether this entry is already saved in the
+             * array. This could come from two cases:
+             *
+             * - user may configure to get all RDMs in this platform,
+             *   which is already queried before this point
+             * - or two assigned devices may share one RDM entry
+             *
+             * Different policies may be configured on the same RDM due to
+             * above two cases. But we don't allow to assign such a group
+             * of devices right now so it doesn't come true in our case.
+             */
+            for (j = 0; j < d_config->num_rdms; j++) {
+                if (d_config->rdms[j].start
+                    == pfn_to_paddr(xrdm[n].start_pfn))
+                {
+                    /*
+                     * So the per-device policy always override the
+                     * global policy in this case.
+                     */
+                    d_config->rdms[j].policy
+                        = d_config->pcidevs[i].rdm_policy;
+                    new = false;
+                    break;
+                }
             }
-        }
-        if (new) {
-            add_rdm_entry(gc, d_config,
-                          pfn_to_paddr(xrdm[0].start_pfn),
-                          pfn_to_paddr(xrdm[0].nr_pages),
-                          d_config->pcidevs[i].rdm_policy);
+            if (new)
+                add_rdm_entry(gc, d_config,
+                              pfn_to_paddr(xrdm[n].start_pfn),
+                              pfn_to_paddr(xrdm[n].nr_pages),
+                              d_config->pcidevs[i].rdm_policy);
         }
     }
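
To make the fixed behaviour concrete, below is a minimal, self-contained
sketch of the iteration the patch corrects. This is not libxl code: the
struct, the stub add_rdm_entry(), and the sample values are hypothetical
stand-ins for the types and helper used above. A device reporting two
reserved regions now has both registered, whereas previously only
xrdm[0] was consumed.

    #include <stdio.h>

    /* Hypothetical stand-in for the libxl RDM entry type. */
    struct rdm_entry {
        unsigned long start_pfn;
        unsigned long nr_pages;
    };

    /* Stub standing in for libxl's add_rdm_entry() helper. */
    static void add_rdm_entry(unsigned long start_pfn, unsigned long nr_pages)
    {
        printf("RDM at PFN %#lx, %lu page(s)\n", start_pfn, nr_pages);
    }

    int main(void)
    {
        /* One assigned device may report several reserved regions. */
        struct rdm_entry xrdm[] = { { 0xab000, 2 }, { 0xcd000, 1 } };
        unsigned int n, nr_entries = sizeof(xrdm) / sizeof(xrdm[0]);

        /* The old code consumed xrdm[0] only; the patched loop,
         * mirrored here, walks all nr_entries entries. */
        for (n = 0; n < nr_entries; ++n)
            add_rdm_entry(xrdm[n].start_pfn, xrdm[n].nr_pages);

        return 0;
    }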