From patchwork Tue Dec 1 08:21:12 2020
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11942207
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Subject: [PATCH v2 01/17] xen/cpupool: add cpu to sched_res_mask when removing it from cpupool
Date: Tue, 1 Dec 2020 09:21:12 +0100
Message-Id: <20201201082128.15239-2-jgross@suse.com>
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>

When a cpu is removed from a cpupool and added to the free cpus, it
should be added to sched_res_mask, too. The related removal from
sched_res_mask in case of core scheduling is already done in
schedule_cpu_add().

As long as all cpupools share the same scheduling granularity, nothing
goes wrong due to the missing addition, but this will change when
per-cpupool granularity is fully supported.

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
 xen/common/sched/core.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index ed973e90ec..f8c81592af 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3189,6 +3189,7 @@ int schedule_cpu_rm(unsigned int cpu)
         /* Adjust cpu masks of resources (old and new). */
         cpumask_clear_cpu(cpu_iter, sr->cpus);
         cpumask_set_cpu(cpu_iter, sr_new[idx]->cpus);
+        cpumask_set_cpu(cpu_iter, &sched_res_mask);
 
         /* Init timer. */
         init_timer(&sr_new[idx]->s_timer, s_timer_fn, NULL, cpu_iter);
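The invariant behind this one-line fix can be modeled in isolation: a cpu
must always be tracked either as part of a pool's scheduling resource or as
a free, stand-alone resource. The sketch below is a toy model using plain
bitmasks as stand-ins for Xen's cpumask_t and sched_res_mask; all names are
illustrative, not Xen code.

#include <stdint.h>
#include <stdio.h>

/* Toy stand-ins for Xen's cpumask bookkeeping; illustrative only. */
static uint64_t pool_res_cpus;  /* cpus folded into a pool's sched resource */
static uint64_t sched_res_mask; /* cpus acting as their own resource */

/* Rough analogue of schedule_cpu_add() with core scheduling: a cpu
 * joining a pool's resource stops being its own resource. */
static void cpu_join_pool(unsigned int cpu)
{
    pool_res_cpus |= 1ULL << cpu;
    sched_res_mask &= ~(1ULL << cpu);
}

/* Rough analogue of schedule_cpu_rm() after the fix: a cpu returned to
 * the free cpus becomes its own resource again - the mirror-image set
 * that was missing before this patch. */
static void cpu_leave_pool(unsigned int cpu)
{
    pool_res_cpus &= ~(1ULL << cpu);
    sched_res_mask |= 1ULL << cpu;   /* the added line's effect */
}

int main(void)
{
    cpu_join_pool(3);
    cpu_leave_pool(3);
    /* Without the set in cpu_leave_pool(), cpu 3 would be in neither mask. */
    printf("cpu 3 is free and its own resource: %d\n",
           !!(sched_res_mask & (1ULL << 3)));
    return 0;
}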
From patchwork Tue Dec 1 08:21:13 2020
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11942217
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Dario Faggioli, George Dunlap
Subject: [PATCH v2 02/17] xen/cpupool: add missing bits for per-cpupool scheduling granularity
Date: Tue, 1 Dec 2020 09:21:13 +0100
Message-Id: <20201201082128.15239-3-jgross@suse.com>
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>

Even with the scheduling granularity stored in struct cpupool, a few
bits are still missing for being able to have cpupools with different
granularities (apart from the missing interface for setting the
individual granularities): the number of cpus in a scheduling unit is
always taken from the global sched_granularity variable.

So store the value in struct cpupool and use that instead of
sched_granularity.

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
 xen/common/sched/cpupool.c | 3 ++-
 xen/common/sched/private.h | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 7ea641ca26..6429c8f7b5 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -151,7 +151,7 @@ static void __init cpupool_gran_init(void)
 
 unsigned int cpupool_get_granularity(const struct cpupool *c)
 {
-    return c ? sched_granularity : 1;
+    return c ? c->sched_gran : 1;
 }
 
 static void free_cpupool_struct(struct cpupool *c)
@@ -289,6 +289,7 @@ static struct cpupool *cpupool_create(
     }
     c->sched->cpupool = c;
     c->gran = opt_sched_granularity;
+    c->sched_gran = sched_granularity;
 
     *q = c;
 
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index df50976eb2..685992cab9 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -514,6 +514,7 @@ struct cpupool
     struct scheduler *sched;
     atomic_t refcnt;
     enum sched_gran gran;
+    unsigned int sched_gran;     /* Number of cpus per sched-item. */
 };
 
 static inline cpumask_t *cpupool_domain_master_cpumask(const struct domain *d)
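To make the effect of per-pool granularity concrete, here is a small
self-contained model (names are illustrative, not Xen's) of the one place
the value matters: how many cpus form one scheduling unit, and hence how
many units a domain with a given vcpu count needs. It uses the same
DIV_ROUND_UP rounding the scheduler core applies.

#include <stdio.h>

/* Toy model: each pool now carries its own cpus-per-unit value. */
struct pool_model {
    unsigned int sched_gran;   /* cpus per scheduling unit */
};

/* DIV_ROUND_UP(max_vcpus, gran), as used by the scheduler core. */
static unsigned int units_for(const struct pool_model *p, unsigned int vcpus)
{
    return (vcpus + p->sched_gran - 1) / p->sched_gran;
}

int main(void)
{
    struct pool_model cpu_pool  = { .sched_gran = 1 };  /* per-cpu */
    struct pool_model core_pool = { .sched_gran = 2 };  /* per-core (SMT2) */

    printf("5 vcpus: %u units in cpu pool, %u units in core pool\n",
           units_for(&cpu_pool, 5), units_for(&core_pool, 5)); /* 5 and 3 */
    return 0;
}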
From patchwork Tue Dec 1 08:21:14 2020
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11942209
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Dario Faggioli, George Dunlap
Subject: [PATCH v2 03/17] xen/cpupool: sort included headers in cpupool.c
Date: Tue, 1 Dec 2020 09:21:14 +0100
Message-Id: <20201201082128.15239-4-jgross@suse.com>
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>

Common style is to include header files in alphabetical order. Sort the
#include statements in cpupool.c accordingly.

Signed-off-by: Juergen Gross
Acked-by: Dario Faggioli
---
 xen/common/sched/cpupool.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 6429c8f7b5..84f326ea63 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -11,15 +11,15 @@
  * (C) 2009, Juergen Gross, Fujitsu Technology Solutions
  */
 
-#include
-#include
+#include
 #include
+#include
+#include
+#include
 #include
 #include
 #include
 #include
-#include
-#include
 
 #include "private.h"
From patchwork Tue Dec 1 08:21:15 2020
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11942211
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli, Andrew Cooper, Ian Jackson, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH v2 04/17] xen/cpupool: switch cpupool id to unsigned
Date: Tue, 1 Dec 2020 09:21:15 +0100
Message-Id: <20201201082128.15239-5-jgross@suse.com>
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>

The cpupool id is an unsigned value in the public interface header, so
there is no reason why it is a signed value in struct cpupool. Switch
it to unsigned int.

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
V2:
- new patch
---
 xen/common/sched/core.c    |  2 +-
 xen/common/sched/cpupool.c | 48 +++++++++++++++++++-------------------
 xen/common/sched/private.h |  8 +++----
 xen/include/xen/sched.h    |  4 ++--
 4 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index f8c81592af..6063f6d9ea 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -757,7 +757,7 @@ void sched_destroy_vcpu(struct vcpu *v)
     }
 }
 
-int sched_init_domain(struct domain *d, int poolid)
+int sched_init_domain(struct domain *d, unsigned int poolid)
 {
     void *sdom;
     int ret;
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 84f326ea63..01fa71dd00 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -187,7 +187,7 @@ static struct cpupool *alloc_cpupool_struct(void)
  * the searched id is returned
  * returns NULL if not found.
  */
-static struct cpupool *__cpupool_find_by_id(int id, bool exact)
+static struct cpupool *__cpupool_find_by_id(unsigned int id, bool exact)
 {
     struct cpupool **q;
 
@@ -200,12 +200,12 @@ static struct cpupool *__cpupool_find_by_id(int id, bool exact)
     return (!exact || (*q == NULL) || ((*q)->cpupool_id == id)) ? *q : NULL;
 }
 
-static struct cpupool *cpupool_find_by_id(int poolid)
+static struct cpupool *cpupool_find_by_id(unsigned int poolid)
 {
     return __cpupool_find_by_id(poolid, true);
 }
 
-static struct cpupool *__cpupool_get_by_id(int poolid, bool exact)
+static struct cpupool *__cpupool_get_by_id(unsigned int poolid, bool exact)
 {
     struct cpupool *c;
     spin_lock(&cpupool_lock);
@@ -216,12 +216,12 @@ static struct cpupool *__cpupool_get_by_id(int poolid, bool exact)
     return c;
 }
 
-struct cpupool *cpupool_get_by_id(int poolid)
+struct cpupool *cpupool_get_by_id(unsigned int poolid)
 {
     return __cpupool_get_by_id(poolid, true);
 }
 
-static struct cpupool *cpupool_get_next_by_id(int poolid)
+static struct cpupool *cpupool_get_next_by_id(unsigned int poolid)
 {
     return __cpupool_get_by_id(poolid, false);
 }
@@ -243,11 +243,11 @@ void cpupool_put(struct cpupool *pool)
  * - unknown scheduler
  */
 static struct cpupool *cpupool_create(
-    int poolid, unsigned int sched_id, int *perr)
+    unsigned int poolid, unsigned int sched_id, int *perr)
 {
     struct cpupool *c;
     struct cpupool **q;
-    int last = 0;
+    unsigned int last = 0;
 
     *perr = -ENOMEM;
     if ( (c = alloc_cpupool_struct()) == NULL )
@@ -256,7 +256,7 @@ static struct cpupool *cpupool_create(
     /* One reference for caller, one reference for cpupool_destroy(). */
     atomic_set(&c->refcnt, 2);
 
-    debugtrace_printk("cpupool_create(pool=%d,sched=%u)\n", poolid, sched_id);
+    debugtrace_printk("cpupool_create(pool=%u,sched=%u)\n", poolid, sched_id);
 
     spin_lock(&cpupool_lock);
 
@@ -295,7 +295,7 @@ static struct cpupool *cpupool_create(
 
     spin_unlock(&cpupool_lock);
 
-    debugtrace_printk("Created cpupool %d with scheduler %s (%s)\n",
+    debugtrace_printk("Created cpupool %u with scheduler %s (%s)\n",
                       c->cpupool_id, c->sched->name, c->sched->opt_name);
 
     *perr = 0;
@@ -337,7 +337,7 @@ static int cpupool_destroy(struct cpupool *c)
 
     cpupool_put(c);
 
-    debugtrace_printk("cpupool_destroy(pool=%d)\n", c->cpupool_id);
+    debugtrace_printk("cpupool_destroy(pool=%u)\n", c->cpupool_id);
     return 0;
 }
 
@@ -521,7 +521,7 @@ static long cpupool_unassign_cpu_helper(void *info)
     struct cpupool *c = info;
     long ret;
 
-    debugtrace_printk("cpupool_unassign_cpu(pool=%d,cpu=%d)\n",
+    debugtrace_printk("cpupool_unassign_cpu(pool=%u,cpu=%d)\n",
                       cpupool_cpu_moving->cpupool_id, cpupool_moving_cpu);
     spin_lock(&cpupool_lock);
@@ -551,7 +551,7 @@ static int cpupool_unassign_cpu(struct cpupool *c, unsigned int cpu)
     int ret;
     unsigned int master_cpu;
 
-    debugtrace_printk("cpupool_unassign_cpu(pool=%d,cpu=%d)\n",
+    debugtrace_printk("cpupool_unassign_cpu(pool=%u,cpu=%d)\n",
                       c->cpupool_id, cpu);
 
     if ( !cpu_online(cpu) )
@@ -561,7 +561,7 @@ static int cpupool_unassign_cpu(struct cpupool *c, unsigned int cpu)
     ret = cpupool_unassign_cpu_start(c, master_cpu);
     if ( ret )
     {
-        debugtrace_printk("cpupool_unassign_cpu(pool=%d,cpu=%d) ret %d\n",
+        debugtrace_printk("cpupool_unassign_cpu(pool=%u,cpu=%d) ret %d\n",
                           c->cpupool_id, cpu, ret);
         return ret;
     }
@@ -582,7 +582,7 @@ static int cpupool_unassign_cpu(struct cpupool *c, unsigned int cpu)
  * - pool does not exist
  * - no cpu assigned to pool
  */
-int cpupool_add_domain(struct domain *d, int poolid)
+int cpupool_add_domain(struct domain *d, unsigned int poolid)
 {
     struct cpupool *c;
     int rc;
@@ -604,7 +604,7 @@ int cpupool_add_domain(struct domain *d, int poolid)
         rc = 0;
     }
     spin_unlock(&cpupool_lock);
-    debugtrace_printk("cpupool_add_domain(dom=%d,pool=%d) n_dom %d rc %d\n",
+    debugtrace_printk("cpupool_add_domain(dom=%d,pool=%u) n_dom %d rc %d\n",
                       d->domain_id, poolid, n_dom, rc);
     return rc;
 }
@@ -614,7 +614,7 @@ int cpupool_add_domain(struct domain *d, int poolid)
  */
 void cpupool_rm_domain(struct domain *d)
 {
-    int cpupool_id;
+    unsigned int cpupool_id;
     int n_dom;
 
     if ( d->cpupool == NULL )
@@ -625,7 +625,7 @@ void cpupool_rm_domain(struct domain *d)
     n_dom = d->cpupool->n_dom;
     d->cpupool = NULL;
     spin_unlock(&cpupool_lock);
-    debugtrace_printk("cpupool_rm_domain(dom=%d,pool=%d) n_dom %d\n",
+    debugtrace_printk("cpupool_rm_domain(dom=%d,pool=%u) n_dom %d\n",
                       d->domain_id, cpupool_id, n_dom);
     return;
 }
@@ -767,7 +767,7 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
 
     case XEN_SYSCTL_CPUPOOL_OP_CREATE:
     {
-        int poolid;
+        unsigned int poolid;
 
         poolid = (op->cpupool_id == XEN_SYSCTL_CPUPOOL_PAR_ANY) ?
             CPUPOOLID_NONE: op->cpupool_id;
@@ -811,7 +811,7 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
         const cpumask_t *cpus;
 
         cpu = op->cpu;
-        debugtrace_printk("cpupool_assign_cpu(pool=%d,cpu=%d)\n",
+        debugtrace_printk("cpupool_assign_cpu(pool=%u,cpu=%u)\n",
                           op->cpupool_id, cpu);
 
         spin_lock(&cpupool_lock);
@@ -844,7 +844,7 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
     addcpu_out:
         spin_unlock(&cpupool_lock);
-        debugtrace_printk("cpupool_assign_cpu(pool=%d,cpu=%d) ret %d\n",
+        debugtrace_printk("cpupool_assign_cpu(pool=%u,cpu=%u) ret %d\n",
                           op->cpupool_id, cpu, ret);
     }
 
@@ -885,7 +885,7 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
             rcu_unlock_domain(d);
             break;
         }
-        debugtrace_printk("cpupool move_domain(dom=%d)->pool=%d\n",
+        debugtrace_printk("cpupool move_domain(dom=%d)->pool=%u\n",
                           d->domain_id, op->cpupool_id);
         ret = -ENOENT;
         spin_lock(&cpupool_lock);
@@ -895,7 +895,7 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
             ret = cpupool_move_domain_locked(d, c);
         spin_unlock(&cpupool_lock);
 
-        debugtrace_printk("cpupool move_domain(dom=%d)->pool=%d ret %d\n",
+        debugtrace_printk("cpupool move_domain(dom=%d)->pool=%u ret %d\n",
                           d->domain_id, op->cpupool_id, ret);
         rcu_unlock_domain(d);
     }
@@ -916,7 +916,7 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
     return ret;
 }
 
-int cpupool_get_id(const struct domain *d)
+unsigned int cpupool_get_id(const struct domain *d)
 {
     return d->cpupool ? d->cpupool->cpupool_id : CPUPOOLID_NONE;
 }
@@ -946,7 +946,7 @@ void dump_runq(unsigned char key)
 
     for_each_cpupool(c)
     {
-        printk("Cpupool %d:\n", (*c)->cpupool_id);
+        printk("Cpupool %u:\n", (*c)->cpupool_id);
         printk("Cpus: %*pbl\n", CPUMASK_PR((*c)->cpu_valid));
         sched_gran_print((*c)->gran, cpupool_get_granularity(*c));
         schedule_dump(*c);
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index 685992cab9..e69d9be1e8 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -505,8 +505,8 @@ static inline void sched_unit_unpause(const struct sched_unit *unit)
 
 struct cpupool
 {
-    int cpupool_id;
-#define CPUPOOLID_NONE (-1)
+    unsigned int cpupool_id;
+#define CPUPOOLID_NONE (~0U)
     unsigned int n_dom;
     cpumask_var_t cpu_valid;      /* all cpus assigned to pool */
     cpumask_var_t res_valid;      /* all scheduling resources of pool */
@@ -601,9 +601,9 @@ int cpu_disable_scheduler(unsigned int cpu);
 int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
 int schedule_cpu_rm(unsigned int cpu);
 int sched_move_domain(struct domain *d, struct cpupool *c);
-struct cpupool *cpupool_get_by_id(int poolid);
+struct cpupool *cpupool_get_by_id(unsigned int poolid);
 void cpupool_put(struct cpupool *pool);
-int cpupool_add_domain(struct domain *d, int poolid);
+int cpupool_add_domain(struct domain *d, unsigned int poolid);
 void cpupool_rm_domain(struct domain *d);
 
 #endif /* __XEN_SCHED_IF_H__ */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a345cc01f8..b2878e7b2a 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -691,7 +691,7 @@ void noreturn asm_domain_crash_synchronous(unsigned long addr);
 void scheduler_init(void);
 int  sched_init_vcpu(struct vcpu *v);
 void sched_destroy_vcpu(struct vcpu *v);
-int  sched_init_domain(struct domain *d, int poolid);
+int  sched_init_domain(struct domain *d, unsigned int poolid);
 void sched_destroy_domain(struct domain *d);
 long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *);
 long sched_adjust_global(struct xen_sysctl_scheduler_op *);
@@ -1089,7 +1089,7 @@ static always_inline bool is_cpufreq_controller(const struct domain *d)
 
 int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
-int cpupool_get_id(const struct domain *d);
+unsigned int cpupool_get_id(const struct domain *d);
 const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool);
 
 extern void dump_runq(unsigned char key);
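The only subtlety in such a type switch is the "no pool" sentinel: with a
signed id it was -1, with an unsigned id it becomes the largest
representable value, and every debugtrace format switches from %d to %u.
A minimal stand-alone illustration, reusing the CPUPOOLID_NONE definition
from the hunk above:

#include <stdio.h>

#define CPUPOOLID_NONE (~0U)   /* was (-1) while the id was signed */

int main(void)
{
    unsigned int poolid = CPUPOOLID_NONE;

    /* (~0U) and (unsigned int)-1 are the same bit pattern, so callers
     * that previously passed -1 still compare equal to the sentinel. */
    printf("sentinel: %u, matches -1 cast: %d\n",
           poolid, poolid == (unsigned int)-1);
    return 0;
}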
From patchwork Tue Dec 1 08:21:16 2020
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11942221
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Dario Faggioli, George Dunlap
Subject: [PATCH v2 05/17] xen/cpupool: switch cpupool list to normal list interface
Date: Tue, 1 Dec 2020 09:21:16 +0100
Message-Id: <20201201082128.15239-6-jgross@suse.com>
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>

Instead of open coding something like a linked list, just use the
available functionality from list.h.

The allocation of a new cpupool id is not aware of a possible wrap.
Fix that.

While adding the required new include to private.h, sort the includes.

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
V2:
- new patch
---
 xen/common/sched/cpupool.c | 100 ++++++++++++++++++++-----------------
 xen/common/sched/private.h |   4 +-
 2 files changed, 57 insertions(+), 47 deletions(-)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 01fa71dd00..714cd47ae9 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -23,13 +24,10 @@
 
 #include "private.h"
 
-#define for_each_cpupool(ptr)    \
-    for ((ptr) = &cpupool_list; *(ptr) != NULL; (ptr) = &((*(ptr))->next))
-
 struct cpupool *cpupool0;                /* Initial cpupool with Dom0 */
 cpumask_t cpupool_free_cpus;             /* cpus not in any cpupool */
 
-static struct cpupool *cpupool_list;     /* linked list, sorted by poolid */
+static LIST_HEAD(cpupool_list);          /* linked list, sorted by poolid */
 
 static int cpupool_moving_cpu = -1;
 static struct cpupool *cpupool_cpu_moving = NULL;
@@ -189,15 +187,15 @@ static struct cpupool *alloc_cpupool_struct(void)
  */
 static struct cpupool *__cpupool_find_by_id(unsigned int id, bool exact)
 {
-    struct cpupool **q;
+    struct cpupool *q;
 
     ASSERT(spin_is_locked(&cpupool_lock));
 
-    for_each_cpupool(q)
-        if ( (*q)->cpupool_id >= id )
-            break;
+    list_for_each_entry(q, &cpupool_list, list)
+        if ( q->cpupool_id == id || (!exact && q->cpupool_id > id) )
+            return q;
 
-    return (!exact || (*q == NULL) || ((*q)->cpupool_id == id)) ? *q : NULL;
+    return NULL;
 }
 
 static struct cpupool *cpupool_find_by_id(unsigned int poolid)
@@ -246,8 +244,7 @@ static struct cpupool *cpupool_create(
     unsigned int poolid, unsigned int sched_id, int *perr)
 {
     struct cpupool *c;
-    struct cpupool **q;
-    unsigned int last = 0;
+    struct cpupool *q;
 
     *perr = -ENOMEM;
     if ( (c = alloc_cpupool_struct()) == NULL )
@@ -260,23 +257,42 @@ static struct cpupool *cpupool_create(
 
     spin_lock(&cpupool_lock);
 
-    for_each_cpupool(q)
+    if ( poolid != CPUPOOLID_NONE )
     {
-        last = (*q)->cpupool_id;
-        if ( (poolid != CPUPOOLID_NONE) && (last >= poolid) )
-            break;
+        q = __cpupool_find_by_id(poolid, false);
+        if ( !q )
+            list_add_tail(&c->list, &cpupool_list);
+        else
+        {
+            list_add_tail(&c->list, &q->list);
+            if ( q->cpupool_id == poolid )
+            {
+                *perr = -EEXIST;
+                goto err;
+            }
+        }
+
+        c->cpupool_id = poolid;
     }
-    if ( *q != NULL )
+    else
     {
-        if ( (*q)->cpupool_id == poolid )
+        /* Cpupool 0 is created with specified id at boot and never removed. */
+        ASSERT(!list_empty(&cpupool_list));
+
+        q = list_last_entry(&cpupool_list, struct cpupool, list);
+        /* In case of wrap search for first free id. */
+        if ( q->cpupool_id == CPUPOOLID_NONE - 1 )
         {
-            *perr = -EEXIST;
-            goto err;
+            list_for_each_entry(q, &cpupool_list, list)
+                if ( q->cpupool_id + 1 != list_next_entry(q, list)->cpupool_id )
+                    break;
         }
-        c->next = *q;
+
+        list_add(&c->list, &q->list);
+
+        c->cpupool_id = q->cpupool_id + 1;
     }
-    c->cpupool_id = (poolid == CPUPOOLID_NONE) ? (last + 1) : poolid;
 
     if ( poolid == 0 )
     {
         c->sched = scheduler_get_default();
@@ -291,8 +307,6 @@ static struct cpupool *cpupool_create(
     c->gran = opt_sched_granularity;
     c->sched_gran = sched_granularity;
 
-    *q = c;
-
     spin_unlock(&cpupool_lock);
 
     debugtrace_printk("Created cpupool %u with scheduler %s (%s)\n",
@@ -302,6 +316,8 @@ static struct cpupool *cpupool_create(
     return c;
 
  err:
+    list_del(&c->list);
+
     spin_unlock(&cpupool_lock);
     free_cpupool_struct(c);
     return NULL;
@@ -312,27 +328,19 @@ static struct cpupool *cpupool_create(
  * possible failures:
  * - pool still in use
  * - cpus still assigned to pool
- * - pool not in list
 */
 static int cpupool_destroy(struct cpupool *c)
{
-    struct cpupool **q;
-
     spin_lock(&cpupool_lock);
-    for_each_cpupool(q)
-        if ( *q == c )
-            break;
-    if ( *q != c )
-    {
-        spin_unlock(&cpupool_lock);
-        return -ENOENT;
-    }
+
     if ( (c->n_dom != 0) || cpumask_weight(c->cpu_valid) )
     {
         spin_unlock(&cpupool_lock);
         return -EBUSY;
     }
-    *q = c->next;
+
+    list_del(&c->list);
+
     spin_unlock(&cpupool_lock);
 
     cpupool_put(c);
@@ -732,17 +740,17 @@ static int cpupool_cpu_remove_prologue(unsigned int cpu)
  */
 static void cpupool_cpu_remove_forced(unsigned int cpu)
 {
-    struct cpupool **c;
+    struct cpupool *c;
     int ret;
     unsigned int master_cpu = sched_get_resource_cpu(cpu);
 
-    for_each_cpupool ( c )
+    list_for_each_entry(c, &cpupool_list, list)
     {
-        if ( cpumask_test_cpu(master_cpu, (*c)->cpu_valid) )
+        if ( cpumask_test_cpu(master_cpu, c->cpu_valid) )
         {
-            ret = cpupool_unassign_cpu_start(*c, master_cpu);
+            ret = cpupool_unassign_cpu_start(c, master_cpu);
             BUG_ON(ret);
-            ret = cpupool_unassign_cpu_finish(*c);
+            ret = cpupool_unassign_cpu_finish(c);
             BUG_ON(ret);
         }
     }
@@ -929,7 +937,7 @@ const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool)
 void dump_runq(unsigned char key)
 {
     s_time_t now = NOW();
-    struct cpupool **c;
+    struct cpupool *c;
 
     spin_lock(&cpupool_lock);
 
@@ -944,12 +952,12 @@ void dump_runq(unsigned char key)
         schedule_dump(NULL);
     }
 
-    for_each_cpupool(c)
+    list_for_each_entry(c, &cpupool_list, list)
     {
-        printk("Cpupool %u:\n", (*c)->cpupool_id);
-        printk("Cpus: %*pbl\n", CPUMASK_PR((*c)->cpu_valid));
-        sched_gran_print((*c)->gran, cpupool_get_granularity(*c));
-        schedule_dump(*c);
+        printk("Cpupool %u:\n", c->cpupool_id);
+        printk("Cpus: %*pbl\n", CPUMASK_PR(c->cpu_valid));
+        sched_gran_print(c->gran, cpupool_get_granularity(c));
+        schedule_dump(c);
     }
 
     spin_unlock(&cpupool_lock);
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index e69d9be1e8..6953cefa6e 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -8,8 +8,9 @@
 #ifndef __XEN_SCHED_IF_H__
 #define __XEN_SCHED_IF_H__
 
-#include
 #include
+#include
+#include
 #include
 
 /* cpus currently in no cpupool */
@@ -510,6 +511,7 @@ struct cpupool
     unsigned int n_dom;
     cpumask_var_t cpu_valid;      /* all cpus assigned to pool */
     cpumask_var_t res_valid;      /* all scheduling resources of pool */
+    struct list_head list;
     struct cpupool *next;
     struct scheduler *sched;
     atomic_t refcnt;
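For readers less familiar with the list.h interface adopted above, here is
a minimal, self-contained re-creation of the pattern. The list primitives
below are stand-ins written for illustration; Xen's <xen/list.h> mirrors
the Linux API they imitate, and the struct names are not Xen's.

#include <stdio.h>
#include <stddef.h>

/* Minimal re-creation of the list_head interface; illustrative only. */
struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD(name) struct list_head name = { &(name), &(name) }

static void list_add_tail(struct list_head *new, struct list_head *head)
{
    new->prev = head->prev;          /* old tail becomes our predecessor */
    new->next = head;
    head->prev->next = new;
    head->prev = new;                /* we are the new tail */
}

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

#define list_for_each_entry(pos, head, member)                        \
    for ( pos = container_of((head)->next, typeof(*pos), member);     \
          &pos->member != (head);                                     \
          pos = container_of(pos->member.next, typeof(*pos), member) )

/* Toy cpupool embedding its list node, as the patch adds to struct cpupool. */
struct pool { unsigned int id; struct list_head list; };

static LIST_HEAD(pool_list);

int main(void)
{
    struct pool a = { .id = 0 }, b = { .id = 1 };
    struct pool *p;

    list_add_tail(&a.list, &pool_list);
    list_add_tail(&b.list, &pool_list);

    list_for_each_entry(p, &pool_list, list)
        printf("pool %u\n", p->id);

    return 0;
}

The embedded-node design is why the patch can delete the open-coded
for_each_cpupool() macro: traversal, insertion sorted by id, and removal
(list_del) all come for free from the generic interface.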
From patchwork Tue Dec 1 08:21:17 2020
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11942219
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Subject: [PATCH v2 06/17] xen/cpupool: use ERR_PTR() for returning error cause from cpupool_create()
Date: Tue, 1 Dec 2020 09:21:17 +0100
Message-Id: <20201201082128.15239-7-jgross@suse.com>
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>

Instead of passing a pointer to an error variable as a parameter, just
use ERR_PTR() to return the cause of an error from cpupool_create().

This propagates to scheduler_alloc(), too.

Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Reviewed-by: Dario Faggioli
---
V2:
- new patch
---
 xen/common/sched/core.c    | 13 ++++++-------
 xen/common/sched/cpupool.c | 38 ++++++++++++++++++++------------------
 xen/common/sched/private.h |  2 +-
 3 files changed, 27 insertions(+), 26 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 6063f6d9ea..a429fc7640 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3233,26 +3233,25 @@ struct scheduler *scheduler_get_default(void)
     return &ops;
 }
 
-struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr)
+struct scheduler *scheduler_alloc(unsigned int sched_id)
 {
     int i;
+    int ret;
     struct scheduler *sched;
 
     for ( i = 0; i < NUM_SCHEDULERS; i++ )
         if ( schedulers[i] && schedulers[i]->sched_id == sched_id )
             goto found;
-    *perr = -ENOENT;
-    return NULL;
+
+    return ERR_PTR(-ENOENT);
 
  found:
-    *perr = -ENOMEM;
     if ( (sched = xmalloc(struct scheduler)) == NULL )
-        return NULL;
+        return ERR_PTR(-ENOMEM);
     memcpy(sched, schedulers[i], sizeof(*sched));
-    if ( (*perr = sched_init(sched)) != 0 )
+    if ( (ret = sched_init(sched)) != 0 )
     {
         xfree(sched);
-        sched = NULL;
+        sched = ERR_PTR(ret);
     }
 
     return sched;
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 714cd47ae9..0db7d77219 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -240,15 +240,15 @@ void cpupool_put(struct cpupool *pool)
  * - poolid already used
  * - unknown scheduler
 */
-static struct cpupool *cpupool_create(
-    unsigned int poolid, unsigned int sched_id, int *perr)
+static struct cpupool *cpupool_create(unsigned int poolid,
+                                      unsigned int sched_id)
{
     struct cpupool *c;
     struct cpupool *q;
+    int ret;
 
-    *perr = -ENOMEM;
     if ( (c = alloc_cpupool_struct()) == NULL )
-        return NULL;
+        return ERR_PTR(-ENOMEM);
 
     /* One reference for caller, one reference for cpupool_destroy(). */
     atomic_set(&c->refcnt, 2);
@@ -267,7 +267,7 @@ static struct cpupool *cpupool_create(
             list_add_tail(&c->list, &q->list);
             if ( q->cpupool_id == poolid )
             {
-                *perr = -EEXIST;
+                ret = -EEXIST;
                 goto err;
             }
         }
@@ -294,15 +294,15 @@ static struct cpupool *cpupool_create(
     }
 
     if ( poolid == 0 )
-    {
         c->sched = scheduler_get_default();
-    }
     else
+        c->sched = scheduler_alloc(sched_id);
+    if ( IS_ERR(c->sched) )
     {
-        c->sched = scheduler_alloc(sched_id, perr);
-        if ( c->sched == NULL )
-            goto err;
+        ret = PTR_ERR(c->sched);
+        goto err;
     }
+
     c->sched->cpupool = c;
     c->gran = opt_sched_granularity;
     c->sched_gran = sched_granularity;
@@ -312,15 +312,16 @@ static struct cpupool *cpupool_create(
     debugtrace_printk("Created cpupool %u with scheduler %s (%s)\n",
                       c->cpupool_id, c->sched->name, c->sched->opt_name);
 
-    *perr = 0;
     return c;
 
  err:
     list_del(&c->list);
 
     spin_unlock(&cpupool_lock);
+
     free_cpupool_struct(c);
-    return NULL;
+
+    return ERR_PTR(ret);
 }
@@ -767,7 +768,7 @@ static void cpupool_cpu_remove_forced(unsigned int cpu)
  */
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
 {
-    int ret;
+    int ret = 0;
     struct cpupool *c;
 
     switch ( op->op )
@@ -779,8 +780,10 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
         poolid = (op->cpupool_id == XEN_SYSCTL_CPUPOOL_PAR_ANY) ?
             CPUPOOLID_NONE: op->cpupool_id;
-        c = cpupool_create(poolid, op->sched_id, &ret);
-        if ( c != NULL )
+        c = cpupool_create(poolid, op->sched_id);
+        if ( IS_ERR(c) )
+            ret = PTR_ERR(c);
+        else
         {
             op->cpupool_id = c->cpupool_id;
             cpupool_put(c);
@@ -1003,12 +1006,11 @@ static struct notifier_block cpu_nfb = {
 static int __init cpupool_init(void)
 {
     unsigned int cpu;
-    int err;
 
     cpupool_gran_init();
 
-    cpupool0 = cpupool_create(0, 0, &err);
-    BUG_ON(cpupool0 == NULL);
+    cpupool0 = cpupool_create(0, 0);
+    BUG_ON(IS_ERR(cpupool0));
     cpupool_put(cpupool0);
 
     register_cpu_notifier(&cpu_nfb);
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index 6953cefa6e..92d0d49610 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -597,7 +597,7 @@ void sched_rm_cpu(unsigned int cpu);
 const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu);
 void schedule_dump(struct cpupool *c);
 struct scheduler *scheduler_get_default(void);
-struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr);
+struct scheduler *scheduler_alloc(unsigned int sched_id);
 void scheduler_free(struct scheduler *sched);
 int cpu_disable_scheduler(unsigned int cpu);
 int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
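For readers unfamiliar with the ERR_PTR() calling convention the patch
adopts, here is a minimal user-space model. The macros below are stand-ins
written for illustration (Xen's actual definitions live in xen/err.h); the
make_thing() allocator is hypothetical.

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

#define MAX_ERRNO 4095

/* A pointer in the top 4095 bytes of the address space encodes an errno. */
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical allocator using the convention: one return value carries
 * either a valid pointer or a negative errno, no out-parameter needed. */
static void *make_thing(int fail)
{
    if ( fail )
        return ERR_PTR(-ENOMEM);
    return malloc(16);
}

int main(void)
{
    void *p = make_thing(1);

    if ( IS_ERR(p) )
        printf("error %ld\n", PTR_ERR(p)); /* -ENOMEM on typical systems */
    else
        free(p);
    return 0;
}

The design benefit is visible in the hunks above: cpupool_create() and
scheduler_alloc() lose their int *perr parameter, and callers test the
single return value with IS_ERR()/PTR_ERR() instead.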
From patchwork Tue Dec 1 08:21:18 2020
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 11942225
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, George Dunlap, Dario Faggioli
Subject: [PATCH v2 07/17] xen/cpupool: support moving domain between cpupools with different granularity
Date: Tue, 1 Dec 2020 09:21:18 +0100
Message-Id: <20201201082128.15239-8-jgross@suse.com>
In-Reply-To: <20201201082128.15239-1-jgross@suse.com>

When moving a domain between cpupools with different scheduling
granularity, the sched_units of the domain need to be adjusted. Do
that by allocating new sched_units and throwing away the old ones in
sched_move_domain().

Signed-off-by: Juergen Gross
---
 xen/common/sched/core.c | 121 ++++++++++++++++++++++++++++++----------
 1 file changed, 90 insertions(+), 31 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index a429fc7640..2a61c879b3 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -613,17 +613,45 @@ static void sched_move_irqs(const struct sched_unit *unit)
         vcpu_move_irqs(v);
 }
 
+/*
+ * Move a domain from one cpupool to another.
+ *
+ * A domain with any vcpu having temporary affinity settings will be denied
+ * to move. Hard and soft affinities will be reset.
+ *
+ * In order to support cpupools with different scheduling granularities all
+ * scheduling units are replaced by new ones.
+ *
+ * The complete move is done in the following steps:
+ * - check prerequisites (no vcpu with temporary affinities)
+ * - allocate all new data structures (scheduler specific domain data, unit
+ *   memory, scheduler specific unit data)
+ * - pause domain
+ * - temporarily move all (old) units to the same scheduling resource (this
+ *   makes the final resource assignment easier in case the new cpupool has
+ *   a larger granularity than the old one, as the scheduling locks for all
+ *   vcpus must be held for that operation)
+ * - remove old units from scheduling
+ * - set new cpupool and scheduler domain data pointers in struct domain
+ * - switch all vcpus to new units, still assigned to the old scheduling
+ *   resource
+ * - migrate all new units to scheduling resources of the new cpupool
+ * - unpause the domain
+ * - free the old memory (scheduler specific domain data, unit memory,
+ *   scheduler specific unit data)
+ */
 int sched_move_domain(struct domain *d, struct cpupool *c)
 {
     struct vcpu *v;
-    struct sched_unit *unit;
+    struct sched_unit *unit, *old_unit;
+    struct sched_unit *new_units = NULL, *old_units;
+    struct sched_unit **unit_ptr = &new_units;
     unsigned int new_p, unit_idx;
-    void **unit_priv;
     void *domdata;
-    void *unitdata;
-    struct scheduler *old_ops;
+    struct scheduler *old_ops = dom_scheduler(d);
     void *old_domdata;
     unsigned int gran = cpupool_get_granularity(c);
+    unsigned int n_units = DIV_ROUND_UP(d->max_vcpus, gran);
     int ret = 0;
 
     for_each_vcpu ( d, v )
@@ -641,53 +669,78 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
         goto out;
     }
 
-    unit_priv = xzalloc_array(void *, DIV_ROUND_UP(d->max_vcpus, gran));
-    if ( unit_priv == NULL )
+    for ( unit_idx = 0; unit_idx < n_units; unit_idx++ )
     {
-        sched_free_domdata(c->sched, domdata);
-        ret = -ENOMEM;
-        goto out;
-    }
+        unit = sched_alloc_unit_mem();
+        if ( unit )
+        {
+            /* Initialize unit for sched_alloc_udata() to work. */
+            unit->domain = d;
+            unit->unit_id = unit_idx * gran;
+            unit->vcpu_list = d->vcpu[unit->unit_id];
+            unit->priv = sched_alloc_udata(c->sched, unit, domdata);
+            *unit_ptr = unit;
+        }
 
-    unit_idx = 0;
-    for_each_sched_unit ( d, unit )
-    {
-        unit_priv[unit_idx] = sched_alloc_udata(c->sched, unit, domdata);
-        if ( unit_priv[unit_idx] == NULL )
+        if ( !unit || !unit->priv )
         {
-            for ( unit_idx = 0; unit_priv[unit_idx]; unit_idx++ )
-                sched_free_udata(c->sched, unit_priv[unit_idx]);
-            xfree(unit_priv);
-            sched_free_domdata(c->sched, domdata);
+            old_units = new_units;
+            old_domdata = domdata;
             ret = -ENOMEM;
-            goto out;
+            goto out_free;
         }
-        unit_idx++;
+
+        unit_ptr = &unit->next_in_list;
     }
 
     domain_pause(d);
 
-    old_ops = dom_scheduler(d);
     old_domdata = d->sched_priv;
 
+    new_p = cpumask_first(d->cpupool->cpu_valid);
     for_each_sched_unit ( d, unit )
     {
+        spinlock_t *lock;
+
+        /*
+         * Temporarily move all units to same processor to make locking
+         * easier when moving the new units to the new processors.
+         */
+        lock = unit_schedule_lock_irq(unit);
+        sched_set_res(unit, get_sched_res(new_p));
+        spin_unlock_irq(lock);
+
         sched_remove_unit(old_ops, unit);
     }
 
+    old_units = d->sched_unit_list;
+
     d->cpupool = c;
     d->sched_priv = domdata;
 
+    unit = new_units;
+    for_each_vcpu ( d, v )
+    {
+        old_unit = v->sched_unit;
+        if ( unit->unit_id + gran == v->vcpu_id )
+            unit = unit->next_in_list;
+
+        unit->state_entry_time = old_unit->state_entry_time;
+        unit->runstate_cnt[v->runstate.state]++;
+        /* Temporarily use old resource assignment */
+        unit->res = get_sched_res(new_p);
+
+        v->sched_unit = unit;
+    }
+
+    d->sched_unit_list = new_units;
+
     new_p = cpumask_first(c->cpu_valid);
-    unit_idx = 0;
     for_each_sched_unit ( d, unit )
     {
         spinlock_t *lock;
         unsigned int unit_p = new_p;
 
-        unitdata = unit->priv;
-        unit->priv = unit_priv[unit_idx];
-
         for_each_sched_unit_vcpu ( unit, v )
         {
             migrate_timer(&v->periodic_timer, new_p);
@@ -713,8 +766,6 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
         sched_insert_unit(c->sched, unit);
 
-        sched_free_udata(old_ops, unitdata);
-
         unit_idx++;
     }
 
@@ -722,11 +773,19 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
     domain_unpause(d);
 
-    sched_free_domdata(old_ops, old_domdata);
+ out_free:
+    for ( unit = old_units; unit; )
+    {
+        if ( unit->priv )
+            sched_free_udata(c->sched, unit->priv);
+        old_unit = unit;
+        unit = unit->next_in_list;
+        xfree(old_unit);
+    }
 
-    xfree(unit_priv);
+    sched_free_domdata(old_ops, old_domdata);
 
-out:
+ out:
     rcu_read_unlock(&sched_res_rculock);
 
     return ret;
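The vcpu-to-unit mapping that the rewritten loop builds can be checked in
isolation: unit k covers vcpu ids [k*gran, (k+1)*gran). The toy program
below replays the same advance test the patch uses in its per-vcpu loop;
everything outside that test is illustrative scaffolding.

#include <stdio.h>

int main(void)
{
    const unsigned int gran = 2, max_vcpus = 5;  /* core granularity, 5 vcpus */
    unsigned int unit_id = 0, v;

    for ( v = 0; v < max_vcpus; v++ )
    {
        /* Same advance condition as sched_move_domain():
         * "if ( unit->unit_id + gran == v->vcpu_id )". */
        if ( unit_id + gran == v )
            unit_id += gran;
        printf("vcpu %u -> unit with unit_id %u\n", v, unit_id);
    }
    /* Output: vcpus 0,1 share unit 0; 2,3 share unit 2; vcpu 4 gets unit 4,
     * matching DIV_ROUND_UP(5, 2) == 3 units. */
    return 0;
}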
From: Juergen Gross To: xen-devel@lists.xenproject.org Cc: Juergen Gross , Andrew Cooper , George Dunlap , Ian Jackson , Jan Beulich , Julien Grall , Stefano Stabellini , Wei Liu Subject: [PATCH v2 08/17] docs: fix hypfs path documentation Date: Tue, 1 Dec 2020 09:21:19 +0100 Message-Id: <20201201082128.15239-9-jgross@suse.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201201082128.15239-1-jgross@suse.com> References: <20201201082128.15239-1-jgross@suse.com> MIME-Version: 1.0

The /params/* entry is missing a writable tag.

Signed-off-by: Juergen Gross Acked-by: Jan Beulich --- docs/misc/hypfs-paths.pandoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc index dddb592bc5..6c7b2f7ee3 100644 --- a/docs/misc/hypfs-paths.pandoc +++ b/docs/misc/hypfs-paths.pandoc @@ -179,7 +179,7 @@ The minor version of Xen. A directory of runtime parameters. -#### /params/* +#### /params/* [w] The individual parameters. The description of the different parameters can be found in `docs/misc/xen-command-line.pandoc`.
From patchwork Tue Dec 1 08:21:20 2020
From: Juergen Gross To: xen-devel@lists.xenproject.org Cc: Juergen Gross ,
Andrew Cooper , George Dunlap , Ian Jackson , Jan Beulich , Julien Grall , Stefano Stabellini , Wei Liu Subject: [PATCH v2 09/17] xen/hypfs: move per-node function pointers into a dedicated struct Date: Tue, 1 Dec 2020 09:21:20 +0100 Message-Id: <20201201082128.15239-10-jgross@suse.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201201082128.15239-1-jgross@suse.com> References: <20201201082128.15239-1-jgross@suse.com> MIME-Version: 1.0

Move the function pointers currently stored in each hypfs node into a dedicated structure in order to save some space for each node. This will save even more space with additional callbacks added in the future. Provide some standard function vectors. Instead of testing the write pointer for NULL, provide a dummy function just returning -EACCES. ASSERT() that all vector entries are populated when adding a node. This avoids any potential problem (e.g. PV domain privilege escalation) caused by calling a non-populated vector entry.

Signed-off-by: Juergen Gross Reviewed-by: Jan Beulich --- V2: - make function vector const (Jan Beulich) - don't allow any NULL entry (Jan Beulich) - add callback comment --- xen/common/hypfs.c | 41 ++++++++++++++++++++---- xen/include/xen/hypfs.h | 71 ++++++++++++++++++++++++++++------------- xen/include/xen/param.h | 15 +++------ 3 files changed, 88 insertions(+), 39 deletions(-) diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c index 8e932b5cf9..7befd144ba 100644 --- a/xen/common/hypfs.c +++ b/xen/common/hypfs.c @@ -24,6 +24,27 @@ CHECK_hypfs_dirlistentry; (DIRENTRY_NAME_OFF + \ ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry))) +const struct hypfs_funcs hypfs_dir_funcs = { + .read = hypfs_read_dir, + .write = hypfs_write_deny, +}; +const struct hypfs_funcs hypfs_leaf_ro_funcs = { + .read = hypfs_read_leaf, + .write = hypfs_write_deny, +}; +const struct hypfs_funcs hypfs_leaf_wr_funcs = { + .read = hypfs_read_leaf, + .write = hypfs_write_leaf, +}; +const struct hypfs_funcs hypfs_bool_wr_funcs = { + .read = hypfs_read_leaf, + .write = hypfs_write_bool, +}; +const struct hypfs_funcs hypfs_custom_wr_funcs = { + .read = hypfs_read_leaf, + .write = hypfs_write_custom, +}; + static DEFINE_RWLOCK(hypfs_lock); enum hypfs_lock_state { hypfs_unlocked, @@ -74,6 +95,9 @@ static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new) int ret = -ENOENT; struct hypfs_entry *e; + ASSERT(new->funcs->read); + ASSERT(new->funcs->write); + hypfs_write_lock(); list_for_each_entry ( e, &parent->dirlist, list ) @@ -284,7 +308,7 @@ static int hypfs_read(const struct hypfs_entry *entry, guest_handle_add_offset(uaddr, sizeof(e)); - ret = entry->read(entry, uaddr); + ret = entry->funcs->read(entry, uaddr); out: return ret; @@ -297,6 +321,7 @@ int hypfs_write_leaf(struct hypfs_entry_leaf *leaf, int ret; ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked); + ASSERT(leaf->e.max_size); if ( ulen > leaf->e.max_size ) return -ENOSPC; @@ -357,6 +382,7 @@ int hypfs_write_custom(struct hypfs_entry_leaf *leaf, int ret; ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked); + ASSERT(leaf->e.max_size); /* Avoid oversized buffer allocation.
*/ if ( ulen > MAX_PARAM_SIZE ) @@ -382,19 +408,20 @@ int hypfs_write_custom(struct hypfs_entry_leaf *leaf, return ret; } +int hypfs_write_deny(struct hypfs_entry_leaf *leaf, + XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen) +{ + return -EACCES; +} + static int hypfs_write(struct hypfs_entry *entry, XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen) { struct hypfs_entry_leaf *l; - if ( !entry->write ) - return -EACCES; - - ASSERT(entry->max_size); - l = container_of(entry, struct hypfs_entry_leaf, e); - return entry->write(l, uaddr, ulen); + return entry->funcs->write(l, uaddr, ulen); } long do_hypfs_op(unsigned int cmd, diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h index 5ad99cb558..25fdf3ead7 100644 --- a/xen/include/xen/hypfs.h +++ b/xen/include/xen/hypfs.h @@ -7,6 +7,32 @@ #include struct hypfs_entry_leaf; +struct hypfs_entry; + +/* + * Per-node callbacks: + * + * The callbacks are always called with the hypfs lock held. + * + * The read() callback is used to return the contents of a node (either + * directory or leaf). It is NOT used to get directory entries during traversal + * of the tree. + * + * The write() callback is used to modify the contents of a node. Writing + * directories is not supported (this means all nodes are added at boot time). + */ +struct hypfs_funcs { + int (*read)(const struct hypfs_entry *entry, + XEN_GUEST_HANDLE_PARAM(void) uaddr); + int (*write)(struct hypfs_entry_leaf *leaf, + XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen); +}; + +extern const struct hypfs_funcs hypfs_dir_funcs; +extern const struct hypfs_funcs hypfs_leaf_ro_funcs; +extern const struct hypfs_funcs hypfs_leaf_wr_funcs; +extern const struct hypfs_funcs hypfs_bool_wr_funcs; +extern const struct hypfs_funcs hypfs_custom_wr_funcs; struct hypfs_entry { unsigned short type; @@ -15,10 +41,7 @@ struct hypfs_entry { unsigned int max_size; const char *name; struct list_head list; - int (*read)(const struct hypfs_entry *entry, - XEN_GUEST_HANDLE_PARAM(void) uaddr); - int (*write)(struct hypfs_entry_leaf *leaf, - XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen); + const struct hypfs_funcs *funcs; }; struct hypfs_entry_leaf { @@ -42,7 +65,7 @@ struct hypfs_entry_dir { .e.size = 0, \ .e.max_size = 0, \ .e.list = LIST_HEAD_INIT(var.e.list), \ - .e.read = hypfs_read_dir, \ + .e.funcs = &hypfs_dir_funcs, \ .dirlist = LIST_HEAD_INIT(var.dirlist), \ } @@ -52,7 +75,7 @@ struct hypfs_entry_dir { .e.encoding = XEN_HYPFS_ENC_PLAIN, \ .e.name = (nam), \ .e.max_size = (msz), \ - .e.read = hypfs_read_leaf, \ + .e.funcs = &hypfs_leaf_ro_funcs, \ } /* Content and size need to be set via hypfs_string_set_reference(). */ @@ -72,35 +95,37 @@ static inline void hypfs_string_set_reference(struct hypfs_entry_leaf *leaf, leaf->e.size = strlen(str) + 1; } -#define HYPFS_FIXEDSIZE_INIT(var, typ, nam, contvar, wr) \ - struct hypfs_entry_leaf __read_mostly var = { \ - .e.type = (typ), \ - .e.encoding = XEN_HYPFS_ENC_PLAIN, \ - .e.name = (nam), \ - .e.size = sizeof(contvar), \ - .e.max_size = (wr) ? sizeof(contvar) : 0, \ - .e.read = hypfs_read_leaf, \ - .e.write = (wr), \ - .u.content = &(contvar), \ +#define HYPFS_FIXEDSIZE_INIT(var, typ, nam, contvar, fn, wr) \ + struct hypfs_entry_leaf __read_mostly var = { \ + .e.type = (typ), \ + .e.encoding = XEN_HYPFS_ENC_PLAIN, \ + .e.name = (nam), \ + .e.size = sizeof(contvar), \ + .e.max_size = (wr) ? 
sizeof(contvar) : 0, \ + .e.funcs = (fn), \ + .u.content = &(contvar), \ } #define HYPFS_UINT_INIT(var, nam, contvar) \ - HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_UINT, nam, contvar, NULL) + HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_UINT, nam, contvar, \ + &hypfs_leaf_ro_funcs, 0) #define HYPFS_UINT_INIT_WRITABLE(var, nam, contvar) \ HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_UINT, nam, contvar, \ - hypfs_write_leaf) + &hypfs_leaf_wr_funcs, 1) #define HYPFS_INT_INIT(var, nam, contvar) \ - HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_INT, nam, contvar, NULL) + HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_INT, nam, contvar, \ + &hypfs_leaf_ro_funcs, 0) #define HYPFS_INT_INIT_WRITABLE(var, nam, contvar) \ HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_INT, nam, contvar, \ - hypfs_write_leaf) + &hypfs_leaf_wr_funcs, 1) #define HYPFS_BOOL_INIT(var, nam, contvar) \ - HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_BOOL, nam, contvar, NULL) + HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_BOOL, nam, contvar, \ + &hypfs_leaf_ro_funcs, 0) #define HYPFS_BOOL_INIT_WRITABLE(var, nam, contvar) \ HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_BOOL, nam, contvar, \ - hypfs_write_bool) + &hypfs_bool_wr_funcs, 1) extern struct hypfs_entry_dir hypfs_root; @@ -112,6 +137,8 @@ int hypfs_read_dir(const struct hypfs_entry *entry, XEN_GUEST_HANDLE_PARAM(void) uaddr); int hypfs_read_leaf(const struct hypfs_entry *entry, XEN_GUEST_HANDLE_PARAM(void) uaddr); +int hypfs_write_deny(struct hypfs_entry_leaf *leaf, + XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen); int hypfs_write_leaf(struct hypfs_entry_leaf *leaf, XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen); int hypfs_write_bool(struct hypfs_entry_leaf *leaf, diff --git a/xen/include/xen/param.h b/xen/include/xen/param.h index d0409d3a0e..1b2c7db954 100644 --- a/xen/include/xen/param.h +++ b/xen/include/xen/param.h @@ -116,8 +116,7 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[]; { .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \ .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \ .hypfs.e.name = (nam), \ - .hypfs.e.read = hypfs_read_leaf, \ - .hypfs.e.write = hypfs_write_custom, \ + .hypfs.e.funcs = &hypfs_custom_wr_funcs, \ .init_leaf = (initfunc), \ .func = (variable) } #define boolean_runtime_only_param(nam, variable) \ @@ -127,8 +126,7 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[]; .hypfs.e.name = (nam), \ .hypfs.e.size = sizeof(variable), \ .hypfs.e.max_size = sizeof(variable), \ - .hypfs.e.read = hypfs_read_leaf, \ - .hypfs.e.write = hypfs_write_bool, \ + .hypfs.e.funcs = &hypfs_bool_wr_funcs, \ .hypfs.u.content = &(variable) } #define integer_runtime_only_param(nam, variable) \ __paramfs __parfs_##variable = \ @@ -137,8 +135,7 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[]; .hypfs.e.name = (nam), \ .hypfs.e.size = sizeof(variable), \ .hypfs.e.max_size = sizeof(variable), \ - .hypfs.e.read = hypfs_read_leaf, \ - .hypfs.e.write = hypfs_write_leaf, \ + .hypfs.e.funcs = &hypfs_leaf_wr_funcs, \ .hypfs.u.content = &(variable) } #define size_runtime_only_param(nam, variable) \ __paramfs __parfs_##variable = \ @@ -147,8 +144,7 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[]; .hypfs.e.name = (nam), \ .hypfs.e.size = sizeof(variable), \ .hypfs.e.max_size = sizeof(variable), \ - .hypfs.e.read = hypfs_read_leaf, \ - .hypfs.e.write = hypfs_write_leaf, \ + .hypfs.e.funcs = &hypfs_leaf_wr_funcs, \ .hypfs.u.content = &(variable) } #define string_runtime_only_param(nam, variable) \ __paramfs __parfs_##variable = \ @@ 
-157,8 +153,7 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[]; .hypfs.e.name = (nam), \ .hypfs.e.size = 0, \ .hypfs.e.max_size = sizeof(variable), \ - .hypfs.e.read = hypfs_read_leaf, \ - .hypfs.e.write = hypfs_write_leaf, \ + .hypfs.e.funcs = &hypfs_leaf_wr_funcs, \ .hypfs.u.content = &(variable) } #else
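A minimal standalone sketch of the pattern used in this patch, one shared const function vector per node flavour with an explicit deny callback instead of NULL pointer tests (all names below are illustrative only, not Xen code):

#include <assert.h>
#include <errno.h>
#include <stddef.h>

struct node;

struct node_funcs {
    int (*read)(const struct node *n);
    int (*write)(struct node *n, const void *buf, unsigned int len);
};

static int read_plain(const struct node *n) { return 0; }
static int write_plain(struct node *n, const void *buf, unsigned int len)
{
    return 0;
}

/* Deny handler: callers dispatch unconditionally, no NULL test needed. */
static int write_deny(struct node *n, const void *buf, unsigned int len)
{
    return -EACCES;
}

/* One shared, read-only vector per flavour instead of two pointers
 * stored in every single node. */
static const struct node_funcs ro_funcs = { read_plain, write_deny };
static const struct node_funcs rw_funcs = { read_plain, write_plain };

struct node {
    const char *name;
    const struct node_funcs *funcs;
};

static void add_node(const struct node *n)
{
    /* Mirrors the new ASSERT()s in add_entry(): every slot populated. */
    assert(n->funcs->read && n->funcs->write);
}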
From patchwork Tue Dec 1 08:21:21 2020
From: Juergen Gross To: xen-devel@lists.xenproject.org Cc: Juergen Gross , Andrew Cooper , George Dunlap , Ian Jackson , Jan Beulich , Julien Grall , Stefano Stabellini , Wei Liu Subject: [PATCH v2 10/17] xen/hypfs: pass real failure reason up from hypfs_get_entry() Date: Tue, 1 Dec 2020 09:21:21 +0100 Message-Id: <20201201082128.15239-11-jgross@suse.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201201082128.15239-1-jgross@suse.com> References: <20201201082128.15239-1-jgross@suse.com> MIME-Version: 1.0

Instead of handling all errors from hypfs_get_entry() as ENOENT, pass up the real error value via ERR_PTR().

Signed-off-by: Juergen Gross Reviewed-by: Jan Beulich --- xen/common/hypfs.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c index 7befd144ba..fdfd0f764a 100644 --- a/xen/common/hypfs.c +++ b/xen/common/hypfs.c @@ -187,7 +187,7 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir, while ( again ) { if ( dir->e.type != XEN_HYPFS_TYPE_DIR ) - return NULL; + return ERR_PTR(-ENOENT); if ( !*path ) return &dir->e; @@ -206,7 +206,7 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir, struct hypfs_entry_dir, e); if ( cmp < 0 ) - return NULL; + return ERR_PTR(-ENOENT); if ( !cmp && strlen(entry->name) == name_len ) { if ( !*end ) @@ -221,13 +221,13 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir, } } - return NULL; + return ERR_PTR(-ENOENT); } static struct hypfs_entry *hypfs_get_entry(const char *path) { if ( path[0] != '/' ) - return NULL; + return ERR_PTR(-EINVAL); return hypfs_get_entry_rel(&hypfs_root, path + 1); } @@ -454,9 +454,9 @@ long do_hypfs_op(unsigned int cmd, goto out; entry = hypfs_get_entry(path); - if ( !entry ) + if ( IS_ERR(entry) ) { - ret = -ENOENT; + ret = PTR_ERR(entry); goto out; }
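The idiom used above encodes a negative errno value in the top of the pointer range, so a single return value can carry either a valid pointer or the precise error. A short sketch of the helpers and the caller pattern (Xen's real definitions live in xen/include/xen/err.h; this standalone reimplementation is for illustration only):

#include <errno.h>
#include <stdbool.h>

#define MAX_ERRNO 4095

/* Addresses in [-MAX_ERRNO, -1] are never valid pointers, so an errno
 * value can travel through a pointer return. */
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline bool IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Caller pattern matching do_hypfs_op() above: the distinction between
 * -EINVAL and -ENOENT is preserved without an extra output parameter. */
static long lookup_and_use(void *(*lookup)(const char *), const char *path)
{
    void *obj = lookup(path);

    if ( IS_ERR(obj) )
        return PTR_ERR(obj);

    /* ... use obj ... */
    return 0;
}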
From patchwork Tue Dec 1 08:21:22 2020
From: Juergen Gross To: xen-devel@lists.xenproject.org Cc: Juergen Gross , Andrew Cooper , George Dunlap , Ian Jackson , Jan Beulich , Julien Grall , Stefano Stabellini , Wei Liu Subject: [PATCH v2 11/17] xen/hypfs: add getsize() and findentry() callbacks to hypfs_funcs Date: Tue, 1 Dec 2020 09:21:22 +0100 Message-Id: <20201201082128.15239-12-jgross@suse.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201201082128.15239-1-jgross@suse.com> References: <20201201082128.15239-1-jgross@suse.com> MIME-Version: 1.0

Add a getsize() function pointer to struct hypfs_funcs in order to allow dynamically filled entries without the need to take the hypfs lock each time the contents are generated. For directories, add a findentry() callback to the vector and modify hypfs_get_entry_rel() to use it.
Signed-off-by: Juergen Gross Reviewed-by: Jan Beulich --- V2: - add a dummy findentry function (Jan Beulich) - add const to findentry dir parameter - don't move DIRENTRY_SIZE() to hypfs.h (Jan Beulich) - split dyndir support off into new patch Signed-off-by: Juergen Gross --- xen/common/hypfs.c | 100 ++++++++++++++++++++++++++-------------- xen/include/xen/hypfs.h | 25 +++++++++- 2 files changed, 89 insertions(+), 36 deletions(-) diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c index fdfd0f764a..83c5cacdca 100644 --- a/xen/common/hypfs.c +++ b/xen/common/hypfs.c @@ -27,22 +27,32 @@ CHECK_hypfs_dirlistentry; const struct hypfs_funcs hypfs_dir_funcs = { .read = hypfs_read_dir, .write = hypfs_write_deny, + .getsize = hypfs_getsize, + .findentry = hypfs_dir_findentry, }; const struct hypfs_funcs hypfs_leaf_ro_funcs = { .read = hypfs_read_leaf, .write = hypfs_write_deny, + .getsize = hypfs_getsize, + .findentry = hypfs_leaf_findentry, }; const struct hypfs_funcs hypfs_leaf_wr_funcs = { .read = hypfs_read_leaf, .write = hypfs_write_leaf, + .getsize = hypfs_getsize, + .findentry = hypfs_leaf_findentry, }; const struct hypfs_funcs hypfs_bool_wr_funcs = { .read = hypfs_read_leaf, .write = hypfs_write_bool, + .getsize = hypfs_getsize, + .findentry = hypfs_leaf_findentry, }; const struct hypfs_funcs hypfs_custom_wr_funcs = { .read = hypfs_read_leaf, .write = hypfs_write_custom, + .getsize = hypfs_getsize, + .findentry = hypfs_leaf_findentry, }; static DEFINE_RWLOCK(hypfs_lock); @@ -97,6 +107,8 @@ static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new) ASSERT(new->funcs->read); ASSERT(new->funcs->write); + ASSERT(new->funcs->getsize); + ASSERT(new->funcs->findentry); hypfs_write_lock(); @@ -176,15 +188,41 @@ static int hypfs_get_path_user(char *buf, return 0; } +struct hypfs_entry *hypfs_leaf_findentry(const struct hypfs_entry_dir *dir, + const char *name, + unsigned int name_len) +{ + return ERR_PTR(-ENOENT); +} + +struct hypfs_entry *hypfs_dir_findentry(const struct hypfs_entry_dir *dir, + const char *name, + unsigned int name_len) +{ + struct hypfs_entry *entry; + + list_for_each_entry ( entry, &dir->dirlist, list ) + { + int cmp = strncmp(name, entry->name, name_len); + + if ( cmp < 0 ) + return ERR_PTR(-ENOENT); + + if ( !cmp && strlen(entry->name) == name_len ) + return entry; + } + + return ERR_PTR(-ENOENT); +} + static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir, const char *path) { const char *end; struct hypfs_entry *entry; unsigned int name_len; - bool again = true; - while ( again ) + for ( ; ; ) { if ( dir->e.type != XEN_HYPFS_TYPE_DIR ) return ERR_PTR(-ENOENT); @@ -197,28 +235,12 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir, end = strchr(path, '\0'); name_len = end - path; - again = false; + entry = dir->e.funcs->findentry(dir, path, name_len); + if ( IS_ERR(entry) || !*end ) + return entry; - list_for_each_entry ( entry, &dir->dirlist, list ) - { - int cmp = strncmp(path, entry->name, name_len); - struct hypfs_entry_dir *d = container_of(entry, - struct hypfs_entry_dir, e); - - if ( cmp < 0 ) - return ERR_PTR(-ENOENT); - if ( !cmp && strlen(entry->name) == name_len ) - { - if ( !*end ) - return entry; - - again = true; - dir = d; - path = end + 1; - - break; - } - } + path = end + 1; + dir = container_of(entry, struct hypfs_entry_dir, e); } return ERR_PTR(-ENOENT); @@ -232,12 +254,17 @@ static struct hypfs_entry *hypfs_get_entry(const char *path) return hypfs_get_entry_rel(&hypfs_root, path + 1); } 
+unsigned int hypfs_getsize(const struct hypfs_entry *entry) +{ + return entry->size; +} + int hypfs_read_dir(const struct hypfs_entry *entry, XEN_GUEST_HANDLE_PARAM(void) uaddr) { const struct hypfs_entry_dir *d; const struct hypfs_entry *e; - unsigned int size = entry->size; + unsigned int size = entry->funcs->getsize(entry); ASSERT(this_cpu(hypfs_locked) != hypfs_unlocked); @@ -252,7 +279,7 @@ int hypfs_read_dir(const struct hypfs_entry *entry, direntry.e.pad = 0; direntry.e.type = e->type; direntry.e.encoding = e->encoding; - direntry.e.content_len = e->size; + direntry.e.content_len = e->funcs->getsize(e); direntry.e.max_write_len = e->max_size; direntry.off_next = list_is_last(&e->list, &d->dirlist) ? 0 : e_len; if ( copy_to_guest(uaddr, &direntry, 1) ) @@ -275,18 +302,20 @@ int hypfs_read_leaf(const struct hypfs_entry *entry, XEN_GUEST_HANDLE_PARAM(void) uaddr) { const struct hypfs_entry_leaf *l; + unsigned int size = entry->funcs->getsize(entry); ASSERT(this_cpu(hypfs_locked) != hypfs_unlocked); l = container_of(entry, const struct hypfs_entry_leaf, e); - return copy_to_guest(uaddr, l->u.content, entry->size) ? -EFAULT: 0; + return copy_to_guest(uaddr, l->u.content, size) ? -EFAULT : 0; } static int hypfs_read(const struct hypfs_entry *entry, XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen) { struct xen_hypfs_direntry e; + unsigned int size = entry->funcs->getsize(entry); long ret = -EINVAL; if ( ulen < sizeof(e) ) @@ -295,7 +324,7 @@ static int hypfs_read(const struct hypfs_entry *entry, e.pad = 0; e.type = entry->type; e.encoding = entry->encoding; - e.content_len = entry->size; + e.content_len = size; e.max_write_len = entry->max_size; ret = -EFAULT; @@ -303,7 +332,7 @@ static int hypfs_read(const struct hypfs_entry *entry, goto out; ret = -ENOBUFS; - if ( ulen < entry->size + sizeof(e) ) + if ( ulen < size + sizeof(e) ) goto out; guest_handle_add_offset(uaddr, sizeof(e)); @@ -319,15 +348,16 @@ int hypfs_write_leaf(struct hypfs_entry_leaf *leaf, { char *buf; int ret; + struct hypfs_entry *e = &leaf->e; ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked); ASSERT(leaf->e.max_size); - if ( ulen > leaf->e.max_size ) + if ( ulen > e->max_size ) return -ENOSPC; - if ( leaf->e.type != XEN_HYPFS_TYPE_STRING && - leaf->e.type != XEN_HYPFS_TYPE_BLOB && ulen != leaf->e.size ) + if ( e->type != XEN_HYPFS_TYPE_STRING && + e->type != XEN_HYPFS_TYPE_BLOB && ulen != e->funcs->getsize(e) ) return -EDOM; buf = xmalloc_array(char, ulen); @@ -339,14 +369,14 @@ int hypfs_write_leaf(struct hypfs_entry_leaf *leaf, goto out; ret = -EINVAL; - if ( leaf->e.type == XEN_HYPFS_TYPE_STRING && - leaf->e.encoding == XEN_HYPFS_ENC_PLAIN && + if ( e->type == XEN_HYPFS_TYPE_STRING && + e->encoding == XEN_HYPFS_ENC_PLAIN && memchr(buf, 0, ulen) != (buf + ulen - 1) ) goto out; ret = 0; memcpy(leaf->u.write_ptr, buf, ulen); - leaf->e.size = ulen; + e->size = ulen; out: xfree(buf); @@ -360,7 +390,7 @@ int hypfs_write_bool(struct hypfs_entry_leaf *leaf, ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked); ASSERT(leaf->e.type == XEN_HYPFS_TYPE_BOOL && - leaf->e.size == sizeof(bool) && + leaf->e.funcs->getsize(&leaf->e) == sizeof(bool) && leaf->e.max_size == sizeof(bool) ); if ( ulen != leaf->e.max_size ) diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h index 25fdf3ead7..53f50772b4 100644 --- a/xen/include/xen/hypfs.h +++ b/xen/include/xen/hypfs.h @@ -2,17 +2,21 @@ #define __XEN_HYPFS_H__ #ifdef CONFIG_HYPFS +#include #include #include #include struct hypfs_entry_leaf; +struct hypfs_entry_dir; 
struct hypfs_entry; /* * Per-node callbacks: * - * The callbacks are always called with the hypfs lock held. + * The callbacks are always called with the hypfs lock held. In case multiple + * callbacks are called for a single operation the lock is held across all + * those callbacks. * * The read() callback is used to return the contents of a node (either * directory or leaf). It is NOT used to get directory entries during traversal @@ -20,12 +24,24 @@ struct hypfs_entry; * * The write() callback is used to modify the contents of a node. Writing * directories is not supported (this means all nodes are added at boot time). + * + * getsize() is called in two cases: + * - when reading a node (directory or leaf) for filling in the size of the + * node into the returned direntry + * - when reading a directory for each node in this directory + * + * findentry() is called for traversing a path from the root node to a node + * for all nodes on that path excluding the final node (so for looking up + * "/a/b/c" findentry() will be called for "/", "/a", and "/a/b"). */ struct hypfs_funcs { int (*read)(const struct hypfs_entry *entry, XEN_GUEST_HANDLE_PARAM(void) uaddr); int (*write)(struct hypfs_entry_leaf *leaf, XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen); + unsigned int (*getsize)(const struct hypfs_entry *entry); + struct hypfs_entry *(*findentry)(const struct hypfs_entry_dir *dir, + const char *name, unsigned int name_len); }; extern const struct hypfs_funcs hypfs_dir_funcs; @@ -145,6 +161,13 @@ int hypfs_write_bool(struct hypfs_entry_leaf *leaf, XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen); int hypfs_write_custom(struct hypfs_entry_leaf *leaf, XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen); +unsigned int hypfs_getsize(const struct hypfs_entry *entry); +struct hypfs_entry *hypfs_leaf_findentry(const struct hypfs_entry_dir *dir, + const char *name, + unsigned int name_len); +struct hypfs_entry *hypfs_dir_findentry(const struct hypfs_entry_dir *dir, + const char *name, + unsigned int name_len); #endif #endif /* __XEN_HYPFS_H__ */
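Why the size query goes through the vector can be seen in a small standalone sketch: a static node reports its stored size while a dynamic node computes the value on demand, and the generic caller stays the same (illustrative names, not Xen code):

#include <stdio.h>

struct entry;

struct entry_funcs {
    unsigned int (*getsize)(const struct entry *e);
};

struct entry {
    unsigned int size;                /* meaningful for static entries */
    const struct entry_funcs *funcs;
};

/* Default behaviour, mirroring hypfs_getsize(): report the stored size. */
static unsigned int getsize_static(const struct entry *e)
{
    return e->size;
}

/* Dynamic entry: the size is computed when asked for, so nothing stale
 * needs to be cached under the lock. */
static unsigned int getsize_dynamic(const struct entry *e)
{
    return (unsigned int)snprintf(NULL, 0, "cpu%u", 4095u) + 1;
}

static const struct entry_funcs static_funcs = { getsize_static };
static const struct entry_funcs dynamic_funcs = { getsize_dynamic };

/* Generic code no longer reads e->size directly: */
static unsigned int content_len(const struct entry *e)
{
    return e->funcs->getsize(e);
}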
From patchwork Tue Dec 1 08:21:23 2020
From: Juergen Gross To: xen-devel@lists.xenproject.org Cc: Juergen Gross , Andrew Cooper , George Dunlap , Ian Jackson , Jan Beulich , Julien Grall , Stefano Stabellini , Wei Liu Subject: [PATCH v2 12/17] xen/hypfs: add new enter() and exit() per node callbacks Date: Tue, 1 Dec 2020 09:21:23 +0100 Message-Id: <20201201082128.15239-13-jgross@suse.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201201082128.15239-1-jgross@suse.com> References: <20201201082128.15239-1-jgross@suse.com> MIME-Version: 1.0

In order to better support resource allocation and locking for dynamic hypfs nodes, add enter() and exit() callbacks to struct hypfs_funcs. The enter() callback is called when entering a node during hypfs user actions (traversing, reading or writing it), while the exit() callback is called when leaving a node (accessing another node at the same or a higher directory level, or when returning to the user). To avoid recursion, this requires a parent pointer in each node. Let the enter() callback return the entry address, which is stored as the last accessed node; this makes it possible to use a template entry for that purpose in the case of dynamic entries.
Signed-off-by: Juergen Gross --- V2: - new patch --- xen/common/hypfs.c | 79 +++++++++++++++++++++++++++++++++++++++++ xen/include/xen/hypfs.h | 5 +++ 2 files changed, 84 insertions(+) diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c index 83c5cacdca..e5adc9defe 100644 --- a/xen/common/hypfs.c +++ b/xen/common/hypfs.c @@ -25,30 +25,40 @@ CHECK_hypfs_dirlistentry; ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry))) const struct hypfs_funcs hypfs_dir_funcs = { + .enter = hypfs_node_enter, + .exit = hypfs_node_exit, .read = hypfs_read_dir, .write = hypfs_write_deny, .getsize = hypfs_getsize, .findentry = hypfs_dir_findentry, }; const struct hypfs_funcs hypfs_leaf_ro_funcs = { + .enter = hypfs_node_enter, + .exit = hypfs_node_exit, .read = hypfs_read_leaf, .write = hypfs_write_deny, .getsize = hypfs_getsize, .findentry = hypfs_leaf_findentry, }; const struct hypfs_funcs hypfs_leaf_wr_funcs = { + .enter = hypfs_node_enter, + .exit = hypfs_node_exit, .read = hypfs_read_leaf, .write = hypfs_write_leaf, .getsize = hypfs_getsize, .findentry = hypfs_leaf_findentry, }; const struct hypfs_funcs hypfs_bool_wr_funcs = { + .enter = hypfs_node_enter, + .exit = hypfs_node_exit, .read = hypfs_read_leaf, .write = hypfs_write_bool, .getsize = hypfs_getsize, .findentry = hypfs_leaf_findentry, }; const struct hypfs_funcs hypfs_custom_wr_funcs = { + .enter = hypfs_node_enter, + .exit = hypfs_node_exit, .read = hypfs_read_leaf, .write = hypfs_write_custom, .getsize = hypfs_getsize, @@ -63,6 +73,8 @@ enum hypfs_lock_state { }; static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked); +static DEFINE_PER_CPU(const struct hypfs_entry *, hypfs_last_node_entered); + HYPFS_DIR_INIT(hypfs_root, ""); static void hypfs_read_lock(void) @@ -100,11 +112,58 @@ static void hypfs_unlock(void) } } +const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry) +{ + return entry; +} + +void hypfs_node_exit(const struct hypfs_entry *entry) +{ +} + +static int node_enter(const struct hypfs_entry *entry) +{ + const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered); + + entry = entry->funcs->enter(entry); + if ( IS_ERR(entry) ) + return PTR_ERR(entry); + + ASSERT(!*last || *last == entry->parent); + + *last = entry; + + return 0; +} + +static void node_exit(const struct hypfs_entry *entry) +{ + const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered); + + if ( !*last ) + return; + + ASSERT(*last == entry); + *last = entry->parent; + + entry->funcs->exit(entry); +} + +static void node_exit_all(void) +{ + const struct hypfs_entry **last = &this_cpu(hypfs_last_node_entered); + + while ( *last ) + node_exit(*last); +} + static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new) { int ret = -ENOENT; struct hypfs_entry *e; + ASSERT(new->funcs->enter); + ASSERT(new->funcs->exit); ASSERT(new->funcs->read); ASSERT(new->funcs->write); ASSERT(new->funcs->getsize); @@ -140,6 +199,7 @@ static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new) unsigned int sz = strlen(new->name); parent->e.size += DIRENTRY_SIZE(sz); + new->parent = &parent->e; } hypfs_unlock(); @@ -221,6 +281,7 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir, const char *end; struct hypfs_entry *entry; unsigned int name_len; + int ret; for ( ; ; ) { @@ -235,6 +296,10 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir, end = strchr(path, '\0'); name_len = end - path; + ret = node_enter(&dir->e); + if ( ret ) + return ERR_PTR(ret); + 
entry = dir->e.funcs->findentry(dir, path, name_len); if ( IS_ERR(entry) || !*end ) return entry; @@ -265,6 +330,7 @@ int hypfs_read_dir(const struct hypfs_entry *entry, const struct hypfs_entry_dir *d; const struct hypfs_entry *e; unsigned int size = entry->funcs->getsize(entry); + int ret; ASSERT(this_cpu(hypfs_locked) != hypfs_unlocked); @@ -276,12 +342,19 @@ int hypfs_read_dir(const struct hypfs_entry *entry, unsigned int e_namelen = strlen(e->name); unsigned int e_len = DIRENTRY_SIZE(e_namelen); + ret = node_enter(e); + if ( ret ) + return ret; + direntry.e.pad = 0; direntry.e.type = e->type; direntry.e.encoding = e->encoding; direntry.e.content_len = e->funcs->getsize(e); direntry.e.max_write_len = e->max_size; direntry.off_next = list_is_last(&e->list, &d->dirlist) ? 0 : e_len; + + node_exit(e); + if ( copy_to_guest(uaddr, &direntry, 1) ) return -EFAULT; @@ -490,6 +563,10 @@ long do_hypfs_op(unsigned int cmd, goto out; } + ret = node_enter(entry); + if ( ret ) + goto out; + switch ( cmd ) { case XEN_HYPFS_OP_read: @@ -506,6 +583,8 @@ long do_hypfs_op(unsigned int cmd, } out: + node_exit_all(); + hypfs_unlock(); return ret; diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h index 53f50772b4..8d96abd805 100644 --- a/xen/include/xen/hypfs.h +++ b/xen/include/xen/hypfs.h @@ -35,6 +35,8 @@ struct hypfs_entry; * "/a/b/c" findentry() will be called for "/", "/a", and "/a/b"). */ struct hypfs_funcs { + const struct hypfs_entry *(*enter)(const struct hypfs_entry *entry); + void (*exit)(const struct hypfs_entry *entry); int (*read)(const struct hypfs_entry *entry, XEN_GUEST_HANDLE_PARAM(void) uaddr); int (*write)(struct hypfs_entry_leaf *leaf, @@ -56,6 +58,7 @@ struct hypfs_entry { unsigned int size; unsigned int max_size; const char *name; + struct hypfs_entry *parent; struct list_head list; const struct hypfs_funcs *funcs; }; @@ -149,6 +152,8 @@ int hypfs_add_dir(struct hypfs_entry_dir *parent, struct hypfs_entry_dir *dir, bool nofault); int hypfs_add_leaf(struct hypfs_entry_dir *parent, struct hypfs_entry_leaf *leaf, bool nofault); +const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry); +void hypfs_node_exit(const struct hypfs_entry *entry); int hypfs_read_dir(const struct hypfs_entry *entry, XEN_GUEST_HANDLE_PARAM(void) uaddr); int hypfs_read_leaf(const struct hypfs_entry *entry,
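The nesting discipline enforced by node_enter()/node_exit() above can be condensed into a standalone sketch (a single cursor for brevity; the real code keeps one per CPU, and the names are illustrative):

#include <assert.h>
#include <stddef.h>

struct node {
    struct node *parent;
};

static struct node *last_entered; /* per-cpu in the real implementation */

static void node_enter(struct node *n)
{
    /* Only a child of the most recently entered node may be entered. */
    assert(last_entered == NULL || last_entered == n->parent);
    last_entered = n;
}

static void node_exit(struct node *n)
{
    assert(last_entered == n);
    last_entered = n->parent; /* pop one level via the parent pointer */
}

/* Called when returning to the user: unwinds without recursion. */
static void node_exit_all(void)
{
    while ( last_entered != NULL )
        node_exit(last_entered);
}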
From patchwork Tue Dec 1 08:21:24 2020
From: Juergen Gross To: xen-devel@lists.xenproject.org Cc: Juergen Gross , Andrew Cooper , George Dunlap , Ian Jackson , Jan Beulich , Julien Grall , Stefano Stabellini , Wei Liu Subject: [PATCH v2 13/17] xen/hypfs: support dynamic hypfs nodes Date: Tue, 1 Dec 2020 09:21:24 +0100 Message-Id: <20201201082128.15239-14-jgross@suse.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201201082128.15239-1-jgross@suse.com> References: <20201201082128.15239-1-jgross@suse.com> MIME-Version: 1.0

Add a HYPFS_VARDIR_INIT() macro for statically initializing a directory with a custom function vector, taking a struct hypfs_funcs pointer as a parameter in addition to those of HYPFS_DIR_INIT(). Modify HYPFS_VARSIZE_INIT() to take the function vector pointer as an additional parameter, as this will be needed for dynamic entries. In order to let the generic hypfs code continue to work on normal struct hypfs_entry entities even for dynamic nodes, add some infrastructure for allocating a working area for the current hypfs request, used to store the information needed for traversing the tree. This area is anchored in a percpu pointer and can be retrieved at any level of the dynamic entries.
The normal way to handle allocation and freeing is to allocate the data in the enter() callback of a node and to free it in the related exit() callback. Add a hypfs_add_dyndir() function for adding a dynamic directory template to the tree, which is needed for having the correct reference to its position in hypfs. Signed-off-by: Juergen Gross --- V2: - switch to xzalloc_bytes() in hypfs_alloc_dyndata() (Jan Beulich) - carved out from previous patch - use enter() and exit() callbacks for allocating and freeing dyndata memory - add hypfs_add_dyndir() --- xen/common/hypfs.c | 31 +++++++++++++++++++++++++++++++ xen/include/xen/hypfs.h | 28 ++++++++++++++++++---------- 2 files changed, 49 insertions(+), 10 deletions(-) diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c index e5adc9defe..39448c5409 100644 --- a/xen/common/hypfs.c +++ b/xen/common/hypfs.c @@ -72,6 +72,7 @@ enum hypfs_lock_state { hypfs_write_locked }; static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked); +static DEFINE_PER_CPU(struct hypfs_dyndata *, hypfs_dyndata); static DEFINE_PER_CPU(const struct hypfs_entry *, hypfs_last_node_entered); @@ -157,6 +158,30 @@ static void node_exit_all(void) node_exit(*last); } +void *hypfs_alloc_dyndata(unsigned long size) +{ + unsigned int cpu = smp_processor_id(); + + ASSERT(per_cpu(hypfs_locked, cpu) != hypfs_unlocked); + ASSERT(per_cpu(hypfs_dyndata, cpu) == NULL); + + per_cpu(hypfs_dyndata, cpu) = xzalloc_bytes(size); + + return per_cpu(hypfs_dyndata, cpu); +} + +void *hypfs_get_dyndata(void) +{ + ASSERT(this_cpu(hypfs_dyndata)); + + return this_cpu(hypfs_dyndata); +} + +void hypfs_free_dyndata(void) +{ + XFREE(this_cpu(hypfs_dyndata)); +} + static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new) { int ret = -ENOENT; @@ -218,6 +243,12 @@ int hypfs_add_dir(struct hypfs_entry_dir *parent, return ret; } +void hypfs_add_dyndir(struct hypfs_entry_dir *parent, + struct hypfs_entry_dir *template) +{ + template->e.parent = &parent->e; +} + int hypfs_add_leaf(struct hypfs_entry_dir *parent, struct hypfs_entry_leaf *leaf, bool nofault) { diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h index 8d96abd805..be8d6c508a 100644 --- a/xen/include/xen/hypfs.h +++ b/xen/include/xen/hypfs.h @@ -76,7 +76,7 @@ struct hypfs_entry_dir { struct list_head dirlist; }; -#define HYPFS_DIR_INIT(var, nam) \ +#define HYPFS_VARDIR_INIT(var, nam, fn) \ struct hypfs_entry_dir __read_mostly var = { \ .e.type = XEN_HYPFS_TYPE_DIR, \ .e.encoding = XEN_HYPFS_ENC_PLAIN, \ @@ -84,22 +84,25 @@ struct hypfs_entry_dir { .e.size = 0, \ .e.max_size = 0, \ .e.list = LIST_HEAD_INIT(var.e.list), \ - .e.funcs = &hypfs_dir_funcs, \ + .e.funcs = (fn), \ .dirlist = LIST_HEAD_INIT(var.dirlist), \ } -#define HYPFS_VARSIZE_INIT(var, typ, nam, msz) \ - struct hypfs_entry_leaf __read_mostly var = { \ - .e.type = (typ), \ - .e.encoding = XEN_HYPFS_ENC_PLAIN, \ - .e.name = (nam), \ - .e.max_size = (msz), \ - .e.funcs = &hypfs_leaf_ro_funcs, \ +#define HYPFS_DIR_INIT(var, nam) \ + HYPFS_VARDIR_INIT(var, nam, &hypfs_dir_funcs) + +#define HYPFS_VARSIZE_INIT(var, typ, nam, msz, fn) \ + struct hypfs_entry_leaf __read_mostly var = { \ + .e.type = (typ), \ + .e.encoding = XEN_HYPFS_ENC_PLAIN, \ + .e.name = (nam), \ + .e.max_size = (msz), \ + .e.funcs = (fn), \ } /* Content and size need to be set via hypfs_string_set_reference(). 
*/ #define HYPFS_STRING_INIT(var, nam) \ - HYPFS_VARSIZE_INIT(var, XEN_HYPFS_TYPE_STRING, nam, 0) + HYPFS_VARSIZE_INIT(var, XEN_HYPFS_TYPE_STRING, nam, 0, &hypfs_leaf_ro_funcs) /* * Set content and size of a XEN_HYPFS_TYPE_STRING node. The node will point @@ -150,6 +153,8 @@ extern struct hypfs_entry_dir hypfs_root; int hypfs_add_dir(struct hypfs_entry_dir *parent, struct hypfs_entry_dir *dir, bool nofault); +void hypfs_add_dyndir(struct hypfs_entry_dir *parent, + struct hypfs_entry_dir *template); int hypfs_add_leaf(struct hypfs_entry_dir *parent, struct hypfs_entry_leaf *leaf, bool nofault); const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry); @@ -173,6 +178,9 @@ struct hypfs_entry *hypfs_leaf_findentry(const struct hypfs_entry_dir *dir, struct hypfs_entry *hypfs_dir_findentry(const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len); +void *hypfs_alloc_dyndata(unsigned long size); +void *hypfs_get_dyndata(void); +void hypfs_free_dyndata(void); #endif #endif /* __XEN_HYPFS_H__ */
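As described in the message above, a dynamic subtree would allocate its working area in its enter() callback and release it in the matching exit() callback. A hypothetical user of the new helpers might look like this (my_dyndata, my_enter() and my_exit() are invented names, not part of the patch):

/* Hypothetical per-request state of a dynamic subtree. */
struct my_dyndata {
    unsigned int id; /* whatever the subtree's other callbacks need */
};

static const struct hypfs_entry *my_enter(const struct hypfs_entry *entry)
{
    struct my_dyndata *data = hypfs_alloc_dyndata(sizeof(*data));

    if ( !data )
        return ERR_PTR(-ENOMEM);

    data->id = 0; /* filled in while traversing the subtree */

    return entry;
}

static void my_exit(const struct hypfs_entry *entry)
{
    hypfs_free_dyndata();
}

Any callback running between the two can then obtain the state via hypfs_get_dyndata().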
From patchwork Tue Dec 1 08:21:25 2020 From: Juergen Gross To: xen-devel@lists.xenproject.org Cc: Juergen Gross , Andrew Cooper , George Dunlap , Ian Jackson , Jan Beulich , Julien Grall , Stefano Stabellini , Wei Liu Subject: [PATCH v2 14/17] xen/hypfs: add support for id-based dynamic directories Date: Tue, 1 Dec 2020 09:21:25 +0100 Message-Id: <20201201082128.15239-15-jgross@suse.com> In-Reply-To: <20201201082128.15239-1-jgross@suse.com> References: <20201201082128.15239-1-jgross@suse.com> Add some helpers to hypfs.c to support dynamic directories with a numerical id as name. The dynamic directory is based on a template specified by the user, allowing specific access functions to be used and giving the directory a predefined set of entries. Signed-off-by: Juergen Gross --- V2: - use macro for length of entry name (Jan Beulich) - const attributes (Jan Beulich) - use template name as format string (Jan Beulich) - add hypfs_dynid_entry_size() helper (Jan Beulich) - expect dyndir data to have been allocated by the enter() callback --- xen/common/hypfs.c | 75 +++++++++++++++++++++++++++++++++++++++++ xen/include/xen/hypfs.h | 17 ++++++++++ 2 files changed, 92 insertions(+) diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c index 39448c5409..1819b26925 100644 --- a/xen/common/hypfs.c +++ b/xen/common/hypfs.c @@ -355,6 +355,81 @@ unsigned int hypfs_getsize(const struct hypfs_entry *entry) return entry->size; } +int hypfs_read_dyndir_id_entry(const struct hypfs_entry_dir *template, + unsigned int id, bool is_last, + XEN_GUEST_HANDLE_PARAM(void) *uaddr) +{ + struct xen_hypfs_dirlistentry direntry; + char name[HYPFS_DYNDIR_ID_NAMELEN]; + unsigned int e_namelen, e_len; + + e_namelen = snprintf(name, sizeof(name), template->e.name, id); + e_len = DIRENTRY_SIZE(e_namelen); + direntry.e.pad = 0; + direntry.e.type = template->e.type; + direntry.e.encoding = template->e.encoding; + direntry.e.content_len = template->e.funcs->getsize(&template->e); + direntry.e.max_write_len = template->e.max_size; + direntry.off_next = is_last ? 0 : e_len; + if ( copy_to_guest(*uaddr, &direntry, 1) ) + return -EFAULT; + if ( copy_to_guest_offset(*uaddr, DIRENTRY_NAME_OFF, name, + e_namelen + 1) ) + return -EFAULT; + + guest_handle_add_offset(*uaddr, e_len); + + return 0; +} + +static struct hypfs_entry *hypfs_dyndir_findentry( + const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len) +{ + const struct hypfs_dyndir_id *data; + + data = hypfs_get_dyndata(); + + /* Use template with original findentry function.
*/ + return data->template->e.funcs->findentry(data->template, name, name_len); +} + +static int hypfs_read_dyndir(const struct hypfs_entry *entry, + XEN_GUEST_HANDLE_PARAM(void) uaddr) +{ + const struct hypfs_dyndir_id *data; + + data = hypfs_get_dyndata(); + + /* Use template with original read function. */ + return data->template->e.funcs->read(&data->template->e, uaddr); +} + +struct hypfs_entry *hypfs_gen_dyndir_entry_id( + const struct hypfs_entry_dir *template, unsigned int id) +{ + struct hypfs_dyndir_id *data; + + data = hypfs_get_dyndata(); + + data->template = template; + data->id = id; + snprintf(data->name, sizeof(data->name), template->e.name, id); + data->dir = *template; + data->dir.e.name = data->name; + data->dir.e.funcs = &data->funcs; + data->funcs = *template->e.funcs; + data->funcs.findentry = hypfs_dyndir_findentry; + data->funcs.read = hypfs_read_dyndir; + + return &data->dir.e; +} + +unsigned int hypfs_dynid_entry_size(const struct hypfs_entry *template, + unsigned int id) +{ + return DIRENTRY_SIZE(snprintf(NULL, 0, template->name, id)); +} + int hypfs_read_dir(const struct hypfs_entry *entry, XEN_GUEST_HANDLE_PARAM(void) uaddr) { diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h index be8d6c508a..1782c50b30 100644 --- a/xen/include/xen/hypfs.h +++ b/xen/include/xen/hypfs.h @@ -76,6 +76,16 @@ struct hypfs_entry_dir { struct list_head dirlist; }; +struct hypfs_dyndir_id { + struct hypfs_entry_dir dir; /* Modified copy of template. */ + struct hypfs_funcs funcs; /* Dynamic functions. */ + const struct hypfs_entry_dir *template; /* Template used. */ +#define HYPFS_DYNDIR_ID_NAMELEN 12 + char name[HYPFS_DYNDIR_ID_NAMELEN]; /* Name of hypfs entry. */ + + unsigned int id; /* Numerical id. */ +}; + #define HYPFS_VARDIR_INIT(var, nam, fn) \ struct hypfs_entry_dir __read_mostly var = { \ .e.type = XEN_HYPFS_TYPE_DIR, \ @@ -181,6 +191,13 @@ struct hypfs_entry *hypfs_dir_findentry(const struct hypfs_entry_dir *dir, void *hypfs_alloc_dyndata(unsigned long size); void *hypfs_get_dyndata(void); void hypfs_free_dyndata(void); +int hypfs_read_dyndir_id_entry(const struct hypfs_entry_dir *template, + unsigned int id, bool is_last, + XEN_GUEST_HANDLE_PARAM(void) *uaddr); +struct hypfs_entry *hypfs_gen_dyndir_entry_id( + const struct hypfs_entry_dir *template, unsigned int id); +unsigned int hypfs_dynid_entry_size(const struct hypfs_entry *template, + unsigned int id); #endif #endif /* __XEN_HYPFS_H__ */
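Each record produced by hypfs_read_dyndir_id_entry() occupies DIRENTRY_SIZE(namelen) bytes, and off_next holds that same size, or 0 for the entry written with is_last set, so records are packed back to back. A consumer holding the whole listing in a local buffer could therefore walk it as sketched below; buf and consume() are assumptions, not part of any interface:

/* Hedged sketch of a consumer walking a dynamic directory listing as
 * produced by hypfs_read_dyndir_id_entry() above. */
const uint8_t *cur = buf;

for ( ;; )
{
    const struct xen_hypfs_dirlistentry *ent = (const void *)cur;

    consume(ent);            /* hypothetical per-entry processing */

    if ( !ent->off_next )    /* 0 marks the entry written with is_last */
        break;

    cur += ent->off_next;    /* off_next == DIRENTRY_SIZE(name length) */
}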
From patchwork Tue Dec 1 08:21:26 2020 From: Juergen Gross To: xen-devel@lists.xenproject.org Cc: Juergen Gross , Andrew Cooper , George Dunlap , Ian Jackson , Jan Beulich , Julien Grall , Stefano Stabellini , Wei Liu , Dario Faggioli Subject: [PATCH v2 15/17] xen/cpupool: add cpupool directories Date: Tue, 1 Dec 2020 09:21:26 +0100 Message-Id: <20201201082128.15239-16-jgross@suse.com> In-Reply-To: <20201201082128.15239-1-jgross@suse.com> References: <20201201082128.15239-1-jgross@suse.com> Add /cpupool/<cpupool-id>/ directories to hypfs. Those are completely dynamic, so the related hypfs access functions need to be implemented. Signed-off-by: Juergen Gross --- V2: - added const (Jan Beulich) - call hypfs_add_dir() in helper (Dario Faggioli) - switch locking to enter/exit callbacks --- docs/misc/hypfs-paths.pandoc | 9 +++ xen/common/sched/cpupool.c | 122 +++++++++++++++++++++++++++++++++++ 2 files changed, 131 insertions(+) diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc index 6c7b2f7ee3..aaca1cdf92 100644 --- a/docs/misc/hypfs-paths.pandoc +++ b/docs/misc/hypfs-paths.pandoc @@ -175,6 +175,15 @@ The major version of Xen.
The minor version of Xen. +#### /cpupool/ + +A directory of all current cpupools. + +#### /cpupool/*/ + +The individual cpupools. Each entry is a directory with the name being the +cpupool-id (e.g. /cpupool/0/). + #### /params/ A directory of runtime parameters. diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c index 0db7d77219..3e17fdf95b 100644 --- a/xen/common/sched/cpupool.c +++ b/xen/common/sched/cpupool.c @@ -13,6 +13,8 @@ #include #include +#include <xen/guest_access.h> +#include <xen/hypfs.h> #include #include #include @@ -33,6 +35,7 @@ static int cpupool_moving_cpu = -1; static struct cpupool *cpupool_cpu_moving = NULL; static cpumask_t cpupool_locked_cpus; +/* This lock nests inside sysctl or hypfs lock. */ static DEFINE_SPINLOCK(cpupool_lock); static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu; @@ -1003,12 +1006,131 @@ static struct notifier_block cpu_nfb = { .notifier_call = cpu_callback }; +#ifdef CONFIG_HYPFS +static const struct hypfs_entry *cpupool_pooldir_enter( + const struct hypfs_entry *entry); + +static struct hypfs_funcs cpupool_pooldir_funcs = { + .enter = cpupool_pooldir_enter, + .exit = hypfs_node_exit, + .read = hypfs_read_dir, + .write = hypfs_write_deny, + .getsize = hypfs_getsize, + .findentry = hypfs_dir_findentry, +}; + +static HYPFS_VARDIR_INIT(cpupool_pooldir, "%u", &cpupool_pooldir_funcs); + +static const struct hypfs_entry *cpupool_pooldir_enter( + const struct hypfs_entry *entry) +{ + return &cpupool_pooldir.e; +} + +static int cpupool_dir_read(const struct hypfs_entry *entry, + XEN_GUEST_HANDLE_PARAM(void) uaddr) +{ + int ret = 0; + const struct cpupool *c; + unsigned int size = 0; + + list_for_each_entry(c, &cpupool_list, list) + { + size += hypfs_dynid_entry_size(entry, c->cpupool_id); + + ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, c->cpupool_id, + list_is_last(&c->list, &cpupool_list), + &uaddr); + if ( ret ) + break; + } + + return ret; +} + +static unsigned int cpupool_dir_getsize(const struct hypfs_entry *entry) +{ + const struct cpupool *c; + unsigned int size = 0; + + list_for_each_entry(c, &cpupool_list, list) + size += hypfs_dynid_entry_size(entry, c->cpupool_id); + + return size; +} + +static const struct hypfs_entry *cpupool_dir_enter( + const struct hypfs_entry *entry) +{ + struct hypfs_dyndir_id *data; + + data = hypfs_alloc_dyndata(sizeof(*data)); + if ( !data ) + return ERR_PTR(-ENOMEM); + data->id = CPUPOOLID_NONE; + + spin_lock(&cpupool_lock); + + return entry; +} + +static void cpupool_dir_exit(const struct hypfs_entry *entry) +{ + spin_unlock(&cpupool_lock); + + hypfs_free_dyndata(); +} + +static struct hypfs_entry *cpupool_dir_findentry( + const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len) +{ + unsigned long id; + const char *end; + const struct cpupool *cpupool; + + id = simple_strtoul(name, &end, 10); + if ( end != name + name_len ) + return ERR_PTR(-ENOENT); + + cpupool = __cpupool_find_by_id(id, true); + + if ( !cpupool ) + return ERR_PTR(-ENOENT); + + return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id); +} + +static struct hypfs_funcs cpupool_dir_funcs = { + .enter = cpupool_dir_enter, + .exit = cpupool_dir_exit, + .read = cpupool_dir_read, + .write = hypfs_write_deny, + .getsize = cpupool_dir_getsize, + .findentry = cpupool_dir_findentry, +}; + +static HYPFS_VARDIR_INIT(cpupool_dir, "cpupool", &cpupool_dir_funcs); + +static void cpupool_hypfs_init(void) +{ + hypfs_add_dir(&hypfs_root, &cpupool_dir, true); + hypfs_add_dyndir(&cpupool_dir, &cpupool_pooldir); +} +#else + +static void
cpupool_hypfs_init(void) +{ +} +#endif + static int __init cpupool_init(void) { unsigned int cpu; cpupool_gran_init(); + cpupool_hypfs_init(); + cpupool0 = cpupool_create(0, 0); BUG_ON(IS_ERR(cpupool0)); cpupool_put(cpupool0);
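A detail of cpupool_dir_findentry() worth spelling out: the lookup succeeds only if simple_strtoul() consumes the complete name, so inputs like "1x" or "0x1" yield -ENOENT rather than silently matching pool 1 or pool 0. The check, restated as a stand-alone predicate (demo_name_is_id() is invented for illustration, not part of the patch):

/* Hedged sketch: the name validation performed by cpupool_dir_findentry()
 * above, extracted into a predicate. */
static bool demo_name_is_id(const char *name, unsigned int name_len)
{
    const char *end;

    simple_strtoul(name, &end, 10);

    /* "0", "42": true; "1x", "0x1", "x": false. */
    return end == name + name_len;
}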
From patchwork Tue Dec 1 08:21:27 2020 From: Juergen Gross To: xen-devel@lists.xenproject.org Cc: Juergen Gross , Andrew Cooper , George Dunlap , Ian Jackson , Jan Beulich , Julien Grall , Stefano Stabellini , Wei Liu , Dario Faggioli Subject: [PATCH v2 16/17] xen/cpupool: add scheduling granularity entry to cpupool entries Date: Tue, 1 Dec 2020 09:21:27 +0100 Message-Id: <20201201082128.15239-17-jgross@suse.com> In-Reply-To: <20201201082128.15239-1-jgross@suse.com> References: <20201201082128.15239-1-jgross@suse.com> Add a "sched-gran" entry to the per-cpupool hypfs directories. For now make this entry read-only and let it contain one of the strings "cpu", "core" or "socket". Signed-off-by: Juergen Gross --- V2: - added const (Jan Beulich) - modify test in cpupool_gran_read() (Jan Beulich) --- docs/misc/hypfs-paths.pandoc | 4 +++ xen/common/sched/cpupool.c | 69 ++++++++++++++++++++++++++++++++++-- 2 files changed, 70 insertions(+), 3 deletions(-) diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc index aaca1cdf92..f1ce24d7fe 100644 --- a/docs/misc/hypfs-paths.pandoc +++ b/docs/misc/hypfs-paths.pandoc @@ -184,6 +184,10 @@ A directory of all current cpupools. The individual cpupools. Each entry is a directory with the name being the cpupool-id (e.g. /cpupool/0/). +#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket") + +The scheduling granularity of a cpupool. + #### /params/ A directory of runtime parameters. diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c index 3e17fdf95b..cfc75ccbe4 100644 --- a/xen/common/sched/cpupool.c +++ b/xen/common/sched/cpupool.c @@ -41,9 +41,10 @@ static DEFINE_SPINLOCK(cpupool_lock); static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu; static unsigned int __read_mostly sched_granularity = 1; +#define SCHED_GRAN_NAME_LEN 8 struct sched_gran_name { enum sched_gran mode; - char name[8]; + char name[SCHED_GRAN_NAME_LEN]; }; static const struct sched_gran_name sg_name[] = { @@ -52,7 +53,7 @@ static const struct sched_gran_name sg_name[] = { {SCHED_GRAN_socket, "socket"}, }; -static void sched_gran_print(enum sched_gran mode, unsigned int gran) +static const char *sched_gran_get_name(enum sched_gran mode) { const char *name = ""; unsigned int i; @@ -66,8 +67,13 @@ static void sched_gran_print(enum sched_gran mode, unsigned int gran) } } + return name; +} + +static void sched_gran_print(enum sched_gran mode, unsigned int gran) +{ printk("Scheduling granularity: %s, %u CPU%s per sched-resource\n", - name, gran, gran == 1 ? "" : "s"); + sched_gran_get_name(mode), gran, gran == 1 ?
"" : "s"); } #ifdef CONFIG_HAS_SCHED_GRANULARITY @@ -1033,9 +1039,14 @@ static int cpupool_dir_read(const struct hypfs_entry *entry, int ret = 0; const struct cpupool *c; unsigned int size = 0; + struct hypfs_dyndir_id *data; + + data = hypfs_get_dyndata(); list_for_each_entry(c, &cpupool_list, list) { + data->id = c->cpupool_id; + size += hypfs_dynid_entry_size(entry, c->cpupool_id); ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, c->cpupool_id, @@ -1100,6 +1111,56 @@ static struct hypfs_entry *cpupool_dir_findentry( return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id); } +static int cpupool_gran_read(const struct hypfs_entry *entry, + XEN_GUEST_HANDLE_PARAM(void) uaddr) +{ + const struct hypfs_dyndir_id *data; + const struct cpupool *cpupool; + const char *gran; + + data = hypfs_get_dyndata(); + cpupool = __cpupool_find_by_id(data->id, true); + ASSERT(cpupool); + + gran = sched_gran_get_name(cpupool->gran); + + if ( !*gran ) + return -ENOENT; + + return copy_to_guest(uaddr, gran, strlen(gran) + 1) ? -EFAULT : 0; +} + +static unsigned int hypfs_gran_getsize(const struct hypfs_entry *entry) +{ + const struct hypfs_dyndir_id *data; + const struct cpupool *cpupool; + const char *gran; + + data = hypfs_get_dyndata(); + cpupool = __cpupool_find_by_id(data->id, true); + ASSERT(cpupool); + + gran = sched_gran_get_name(cpupool->gran); + + return strlen(gran) + 1; +} + +static struct hypfs_funcs cpupool_gran_funcs = { + .enter = hypfs_node_enter, + .exit = hypfs_node_exit, + .read = cpupool_gran_read, + .write = hypfs_write_deny, + .getsize = hypfs_gran_getsize, + .findentry = hypfs_leaf_findentry, +}; + +static HYPFS_VARSIZE_INIT(cpupool_gran, XEN_HYPFS_TYPE_STRING, "sched-gran", + 0, &cpupool_gran_funcs); +static char granstr[SCHED_GRAN_NAME_LEN] = { + [0 ... 
SCHED_GRAN_NAME_LEN - 2] = '?', + [SCHED_GRAN_NAME_LEN - 1] = 0 +}; + static struct hypfs_funcs cpupool_dir_funcs = { .enter = cpupool_dir_enter, .exit = cpupool_dir_exit, @@ -1115,6 +1176,8 @@ static void cpupool_hypfs_init(void) { hypfs_add_dir(&hypfs_root, &cpupool_dir, true); hypfs_add_dyndir(&cpupool_dir, &cpupool_pooldir); + hypfs_string_set_reference(&cpupool_gran, granstr); + hypfs_add_leaf(&cpupool_pooldir, &cpupool_gran, true); } #else
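Note the division of labour here: granstr is only a placeholder buffer handed to hypfs_string_set_reference() so the static leaf carries valid content and size fields, while every real access is served through the dynamic callbacks, using the id stashed in the dyndata by the directory code. For a read of /cpupool/0/sched-gran the sequence is roughly as in the commented sketch below (not in-tree code; demo_gran_expectations() is invented, and the gran field is the one this series adds to struct cpupool):

/* Hedged sketch of the call sequence for reading /cpupool/0/sched-gran:
 *
 *   cpupool_dir_enter()          allocate dyndata, take cpupool_lock
 *   cpupool_dir_findentry("0")   store id 0, return dyndir copy of "%u"
 *   cpupool_gran_read()          look up pool 0 via the stored id, copy
 *                                e.g. "core" including its NUL terminator
 *   cpupool_dir_exit()           drop cpupool_lock, free the dyndata
 */
static void demo_gran_expectations(const struct cpupool *c)
{
    const char *gran = sched_gran_get_name(c->gran);

    /* The content length reported to the guest includes the NUL ... */
    ASSERT(strlen(gran) + 1 <= SCHED_GRAN_NAME_LEN);
    /* ... and an unknown mode yields "", turned into -ENOENT by read. */
}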
From patchwork Tue Dec 1 08:21:28 2020 From: Juergen Gross To: xen-devel@lists.xenproject.org Cc: Juergen Gross , Andrew Cooper , George Dunlap , Ian Jackson , Jan Beulich , Julien Grall , Stefano Stabellini , Wei Liu , Dario Faggioli Subject: [PATCH v2 17/17] xen/cpupool: make per-cpupool sched-gran hypfs node writable Date: Tue, 1 Dec 2020 09:21:28 +0100 Message-Id: <20201201082128.15239-18-jgross@suse.com> In-Reply-To: <20201201082128.15239-1-jgross@suse.com> References: <20201201082128.15239-1-jgross@suse.com> Make /cpupool/<cpupool-id>/sched-gran in hypfs writable. This enables the scheduling granularity to be selected per cpupool. Writing this node is allowed only while no cpu is assigned to the cpupool. The allowed values are "cpu", "core" and "socket". Signed-off-by: Juergen Gross --- V2: - test user parameters earlier (Jan Beulich) --- docs/misc/hypfs-paths.pandoc | 5 ++- xen/common/sched/cpupool.c | 70 ++++++++++++++++++++++++++++++------ 2 files changed, 63 insertions(+), 12 deletions(-) diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc index f1ce24d7fe..e86f7d0dbe 100644 --- a/docs/misc/hypfs-paths.pandoc +++ b/docs/misc/hypfs-paths.pandoc @@ -184,10 +184,13 @@ A directory of all current cpupools. The individual cpupools. Each entry is a directory with the name being the cpupool-id (e.g. /cpupool/0/). -#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket") +#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket") [w] The scheduling granularity of a cpupool. +Writing a value is allowed only for cpupools with no cpu assigned and only +if the architecture supports different scheduling granularities. + #### /params/ A directory of runtime parameters.
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c index cfc75ccbe4..b1d9507978 100644 --- a/xen/common/sched/cpupool.c +++ b/xen/common/sched/cpupool.c @@ -77,7 +77,7 @@ static void sched_gran_print(enum sched_gran mode, unsigned int gran) } #ifdef CONFIG_HAS_SCHED_GRANULARITY -static int __init sched_select_granularity(const char *str) +static int sched_gran_get(const char *str, enum sched_gran *mode) { unsigned int i; @@ -85,36 +85,43 @@ static int __init sched_select_granularity(const char *str) { if ( strcmp(sg_name[i].name, str) == 0 ) { - opt_sched_granularity = sg_name[i].mode; + *mode = sg_name[i].mode; return 0; } } return -EINVAL; } + +static int __init sched_select_granularity(const char *str) +{ + return sched_gran_get(str, &opt_sched_granularity); +} custom_param("sched-gran", sched_select_granularity); +#else +static int sched_gran_get(const char *str, enum sched_gran *mode) +{ + return -EINVAL; +} #endif -static unsigned int __init cpupool_check_granularity(void) +static unsigned int cpupool_check_granularity(enum sched_gran mode) { unsigned int cpu; unsigned int siblings, gran = 0; - if ( opt_sched_granularity == SCHED_GRAN_cpu ) + if ( mode == SCHED_GRAN_cpu ) return 1; for_each_online_cpu ( cpu ) { - siblings = cpumask_weight(sched_get_opt_cpumask(opt_sched_granularity, - cpu)); + siblings = cpumask_weight(sched_get_opt_cpumask(mode, cpu)); if ( gran == 0 ) gran = siblings; else if ( gran != siblings ) return 0; } - sched_disable_smt_switching = true; - return gran; } @@ -126,7 +133,7 @@ static void __init cpupool_gran_init(void) while ( gran == 0 ) { - gran = cpupool_check_granularity(); + gran = cpupool_check_granularity(opt_sched_granularity); if ( gran == 0 ) { @@ -152,6 +159,9 @@ static void __init cpupool_gran_init(void) if ( fallback ) warning_add(fallback); + if ( opt_sched_granularity != SCHED_GRAN_cpu ) + sched_disable_smt_switching = true; + sched_granularity = gran; sched_gran_print(opt_sched_granularity, sched_granularity); } @@ -1145,17 +1155,55 @@ static unsigned int hypfs_gran_getsize(const struct hypfs_entry *entry) return strlen(gran) + 1; } +static int cpupool_gran_write(struct hypfs_entry_leaf *leaf, + XEN_GUEST_HANDLE_PARAM(void) uaddr, + unsigned int ulen) +{ + const struct hypfs_dyndir_id *data; + struct cpupool *cpupool; + enum sched_gran gran; + unsigned int sched_gran = 0; + char name[SCHED_GRAN_NAME_LEN]; + int ret = 0; + + if ( ulen > SCHED_GRAN_NAME_LEN ) + return -ENOSPC; + + if ( copy_from_guest(name, uaddr, ulen) ) + return -EFAULT; + + if ( memchr(name, 0, ulen) == (name + ulen - 1) ) + sched_gran = sched_gran_get(name, &gran) ? + 0 : cpupool_check_granularity(gran); + if ( sched_gran == 0 ) + return -EINVAL; + + data = hypfs_get_dyndata(); + cpupool = __cpupool_find_by_id(data->id, true); + ASSERT(cpupool); + + if ( !cpumask_empty(cpupool->cpu_valid) ) + ret = -EBUSY; + else + { + cpupool->gran = gran; + cpupool->sched_gran = sched_gran; + } + + return ret; +} + static struct hypfs_funcs cpupool_gran_funcs = { .enter = hypfs_node_enter, .exit = hypfs_node_exit, .read = cpupool_gran_read, - .write = hypfs_write_deny, + .write = cpupool_gran_write, .getsize = hypfs_gran_getsize, .findentry = hypfs_leaf_findentry, }; static HYPFS_VARSIZE_INIT(cpupool_gran, XEN_HYPFS_TYPE_STRING, "sched-gran", - 0, &cpupool_gran_funcs); + SCHED_GRAN_NAME_LEN, &cpupool_gran_funcs); static char granstr[SCHED_GRAN_NAME_LEN] = { [0 ... SCHED_GRAN_NAME_LEN - 2] = '?', [SCHED_GRAN_NAME_LEN - 1] = 0