From patchwork Mon Dec 16 00:19:18 2019
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11293249
From: Boqun Feng
To: linux-hyperv@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org
Cc: Sasha Levin, Vincenzo Frascino, Stephen Hemminger, Catalin Marinas,
 Haiyang Zhang, Michael Kelley, Stefano Stabellini, Matteo Croce,
 xen-devel@lists.xenproject.org, Thomas Gleixner, "K. Y. Srinivasan",
 Will Deacon, Boqun Feng
Date: Mon, 16 Dec 2019 08:19:18 +0800
Message-Id: <20191216001922.23008-3-boqun.feng@gmail.com>
In-Reply-To: <20191216001922.23008-1-boqun.feng@gmail.com>
References: <20191216001922.23008-1-boqun.feng@gmail.com>
Subject: [Xen-devel] [RFC 2/6] arm64: vdso: Add support for multiple vDSO data pages
List-Id: Xen developer discussion

Split __vdso_abi::vdso_pages into nr_vdso_{data,code}_pages, so that
__setup_additional_pages() can map multiple vDSO data pages using the
setup done in __vdso_init().

Multiple vDSO data pages are required when running in a virtualized
environment, where the cycle count read from cntvct in userspace needs
to be adjusted with data from a page maintained by the hypervisor, for
example the TSC page on Hyper-V.

This is a prerequisite for vDSO support on ARM64 Hyper-V.

Signed-off-by: Boqun Feng (Microsoft)
---
 arch/arm64/kernel/vdso.c | 43 ++++++++++++++++++++++++----------------
 1 file changed, 26 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 354b11e27c07..b9b5ec7a3084 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -50,7 +50,8 @@ struct __vdso_abi {
 	const char *name;
 	const char *vdso_code_start;
 	const char *vdso_code_end;
-	unsigned long vdso_pages;
+	unsigned long nr_vdso_data_pages;
+	unsigned long nr_vdso_code_pages;
 	/* Data Mapping */
 	struct vm_special_mapping *dm;
 	/* Code Mapping */
@@ -101,6 +102,8 @@ static int __vdso_init(enum arch_vdso_type arch_index)
 {
 	int i;
 	struct page **vdso_pagelist;
+	struct page **vdso_code_pagelist;
+	unsigned long nr_vdso_pages;
 	unsigned long pfn;
 
 	if (memcmp(vdso_lookup[arch_index].vdso_code_start, "\177ELF", 4)) {
@@ -108,14 +111,18 @@ static int __vdso_init(enum arch_vdso_type arch_index)
 		return -EINVAL;
 	}
 
-	vdso_lookup[arch_index].vdso_pages = (
+	vdso_lookup[arch_index].nr_vdso_data_pages = 1;
+
+	vdso_lookup[arch_index].nr_vdso_code_pages = (
 		vdso_lookup[arch_index].vdso_code_end -
 		vdso_lookup[arch_index].vdso_code_start) >>
 		PAGE_SHIFT;
 
-	/* Allocate the vDSO pagelist, plus a page for the data. */
-	vdso_pagelist = kcalloc(vdso_lookup[arch_index].vdso_pages + 1,
-				sizeof(struct page *),
+	nr_vdso_pages = vdso_lookup[arch_index].nr_vdso_data_pages +
+			vdso_lookup[arch_index].nr_vdso_code_pages;
+
+	/* Allocate the vDSO pagelist. */
+	vdso_pagelist = kcalloc(nr_vdso_pages, sizeof(struct page *),
 				GFP_KERNEL);
 	if (vdso_pagelist == NULL)
 		return -ENOMEM;
@@ -123,15 +130,17 @@ static int __vdso_init(enum arch_vdso_type arch_index)
 	/* Grab the vDSO data page. */
 	vdso_pagelist[0] = phys_to_page(__pa_symbol(vdso_data));
 
-	/* Grab the vDSO code pages. */
 	pfn = sym_to_pfn(vdso_lookup[arch_index].vdso_code_start);
-	for (i = 0; i < vdso_lookup[arch_index].vdso_pages; i++)
-		vdso_pagelist[i + 1] = pfn_to_page(pfn + i);
+	vdso_code_pagelist = vdso_pagelist +
+			     vdso_lookup[arch_index].nr_vdso_data_pages;
+
+	for (i = 0; i < vdso_lookup[arch_index].nr_vdso_code_pages; i++)
+		vdso_code_pagelist[i] = pfn_to_page(pfn + i);
 
-	vdso_lookup[arch_index].dm->pages = &vdso_pagelist[0];
-	vdso_lookup[arch_index].cm->pages = &vdso_pagelist[1];
+	vdso_lookup[arch_index].dm->pages = vdso_pagelist;
+	vdso_lookup[arch_index].cm->pages = vdso_code_pagelist;
 
 	return 0;
 }
@@ -141,26 +150,26 @@ static int __setup_additional_pages(enum arch_vdso_type arch_index,
 				    struct linux_binprm *bprm,
 				    int uses_interp)
 {
-	unsigned long vdso_base, vdso_text_len, vdso_mapping_len;
+	unsigned long vdso_base, vdso_text_len, vdso_data_len;
 	void *ret;
 
-	vdso_text_len = vdso_lookup[arch_index].vdso_pages << PAGE_SHIFT;
-	/* Be sure to map the data page */
-	vdso_mapping_len = vdso_text_len + PAGE_SIZE;
+	vdso_data_len = vdso_lookup[arch_index].nr_vdso_data_pages << PAGE_SHIFT;
+	vdso_text_len = vdso_lookup[arch_index].nr_vdso_code_pages << PAGE_SHIFT;
 
-	vdso_base = get_unmapped_area(NULL, 0, vdso_mapping_len, 0, 0);
+	vdso_base = get_unmapped_area(NULL, 0,
+				      vdso_data_len + vdso_text_len, 0, 0);
 	if (IS_ERR_VALUE(vdso_base)) {
 		ret = ERR_PTR(vdso_base);
 		goto up_fail;
 	}
 
-	ret = _install_special_mapping(mm, vdso_base, PAGE_SIZE,
+	ret = _install_special_mapping(mm, vdso_base, vdso_data_len,
 				       VM_READ|VM_MAYREAD,
 				       vdso_lookup[arch_index].dm);
 	if (IS_ERR(ret))
 		goto up_fail;
 
-	vdso_base += PAGE_SIZE;
+	vdso_base += vdso_data_len;
 	mm->context.vdso = (void *)vdso_base;
 	ret = _install_special_mapping(mm, vdso_base, vdso_text_len,
 				       VM_READ|VM_EXEC|