From patchwork Wed Apr 20 21:42:55 2022
X-Patchwork-Submitter: Kalesh Singh
X-Patchwork-Id: 12820838
Date: Wed, 20 Apr 2022 14:42:55 -0700
In-Reply-To: <20220420214317.3303360-1-kaleshsingh@google.com>
Message-Id: <20220420214317.3303360-5-kaleshsingh@google.com>
Mime-Version: 1.0
References: <20220420214317.3303360-1-kaleshsingh@google.com>
X-Mailer: git-send-email 2.36.0.rc0.470.gd361397f0d-goog
Subject: [PATCH v8 4/6] KVM: arm64: Add guard pages for
 pKVM (protected nVHE) hypervisor stack
From: Kalesh Singh
Cc: will@kernel.org, maz@kernel.org, qperret@google.com, tabba@google.com,
 surenb@google.com, kernel-team@android.com, Kalesh Singh, James Morse,
 Alexandru Elisei, Suzuki K Poulose, Catalin Marinas, Andrew Walbran,
 Mark Rutland, Andrew Jones, Ard Biesheuvel, Nick Desaulniers,
 Masahiro Yamada, Nathan Chancellor, Changbin Du,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
 linux-kernel@vger.kernel.org

Map the stack pages in the flexible private VA range and allocate guard
pages below the stack as unbacked VA space. The stack is aligned so that
any valid stack address has the PAGE_SHIFT bit set to 1; this is used for
overflow detection (implemented in a subsequent patch in the series).

Signed-off-by: Kalesh Singh
Tested-by: Fuad Tabba
Reviewed-by: Fuad Tabba
---
Changes in v7:
  - Add Fuad's Reviewed-by and Tested-by tags.
Changes in v6:
  - Update call to pkvm_alloc_private_va_range() (return val and params)
Changes in v5:
  - Use a single allocation for stack and guard pages to ensure they are
    contiguous, per Marc
Changes in v4:
  - Replace IS_ERR_OR_NULL check with IS_ERR check now that
    pkvm_alloc_private_va_range() returns an error for null pointer,
    per Fuad
Changes in v3:
  - Handle null ptr in IS_ERR_OR_NULL checks, per Mark

 arch/arm64/kvm/hyp/nvhe/setup.c | 31 ++++++++++++++++++++++++++++---
 1 file changed, 28 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 27af337f9fea..e8d4ea2fcfa0 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -99,17 +99,42 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
 		return ret;
 
 	for (i = 0; i < hyp_nr_cpus; i++) {
+		struct kvm_nvhe_init_params *params = per_cpu_ptr(&kvm_init_params, i);
+		unsigned long hyp_addr;
+
 		start = (void *)kern_hyp_va(per_cpu_base[i]);
 		end = start + PAGE_ALIGN(hyp_percpu_size);
 		ret = pkvm_create_mappings(start, end, PAGE_HYP);
 		if (ret)
 			return ret;
 
-		end = (void *)per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va;
-		start = end - PAGE_SIZE;
-		ret = pkvm_create_mappings(start, end, PAGE_HYP);
+		/*
+		 * Allocate a contiguous HYP private VA range for the stack
+		 * and guard page. The allocation is also aligned based on
+		 * the order of its size.
+		 */
+		ret = pkvm_alloc_private_va_range(PAGE_SIZE * 2, &hyp_addr);
+		if (ret)
+			return ret;
+
+		/*
+		 * Since the stack grows downwards, map the stack to the page
+		 * at the higher address and leave the lower guard page
+		 * unbacked.
+		 *
+		 * Any valid stack address now has the PAGE_SHIFT bit as 1
+		 * and addresses corresponding to the guard page have the
+		 * PAGE_SHIFT bit as 0 - this is used for overflow detection.
+		 */
+		hyp_spin_lock(&pkvm_pgd_lock);
+		ret = kvm_pgtable_hyp_map(&pkvm_pgtable, hyp_addr + PAGE_SIZE,
+					  PAGE_SIZE, params->stack_pa, PAGE_HYP);
+		hyp_spin_unlock(&pkvm_pgd_lock);
 		if (ret)
 			return ret;
+
+		/* Update stack_hyp_va to end of the stack's private VA range */
+		params->stack_hyp_va = hyp_addr + (2 * PAGE_SIZE);
 	}
 
 	/*