From patchwork Mon Apr 3 23:15:51 2023
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-Patchwork-Submitter: Radu Rendec
X-Patchwork-Id: 13198921
From: Radu Rendec
To: linux-kernel@vger.kernel.org
Cc: Catalin Marinas, Will Deacon, Pierre Gondois, Sudeep Holla,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 2/2] cacheinfo: Add arm64 early level initializer implementation
Date: Mon, 3 Apr 2023 19:15:51 -0400
Message-Id: <20230403231551.1090704-3-rrendec@redhat.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230403231551.1090704-1-rrendec@redhat.com>
References: <20230403231551.1090704-1-rrendec@redhat.com>

This patch adds an architecture-specific early cache level detection handler
for arm64. This is basically the CLIDR_EL1 based detection that was previously
done (only) in init_cache_level().

This is part of a patch series that attempts to further the work in commit
5944ce092b97 ("arch_topology: Build cacheinfo from primary CPU"). Previously,
in the absence of any DT/ACPI cache info, architecture-specific cache detection
and info allocation for secondary CPUs would happen in non-preemptible context
during early CPU initialization and trigger a "BUG: sleeping function called
from invalid context" splat on an RT kernel.

This patch does not solve the problem completely for RT kernels. It relies on
the assumption that on most systems the CPUs are symmetrical and therefore
have the same number of cache leaves. The cacheinfo memory is allocated early
(on the primary CPU), relying on the new handler. If the initial assumption
later proves wrong and a secondary CPU in fact has more leaves (when CLIDR_EL1
based detection runs again on that CPU), the cacheinfo memory is reallocated,
and that still triggers a splat on an RT kernel.

In other words, systems with asymmetrical CPUs *must* still provide cacheinfo
data in DT/ACPI to avoid the splat on RT kernels (unless the secondary CPUs
happen to have fewer leaves than the primary CPU). But systems with symmetrical
CPUs (the majority) can now get away without the additional DT/ACPI data and
rely on CLIDR_EL1 based detection alone.
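For illustration only, the allocation strategy described above can be modeled
with a small standalone C program. Everything below is made up for the
example: the type and helper names are not the kernel's cacheinfo API, and the
fixed per-CPU leaf counts merely stand in for CLIDR_EL1 based detection.

/*
 * Illustrative userspace model of the flow described in the commit message:
 * allocate early using the primary CPU's leaf count, and reallocate later
 * only if a secondary CPU turns out to have more leaves (the case that still
 * triggers the splat on an RT kernel). Not the kernel's cacheinfo code.
 */
#include <stdio.h>
#include <stdlib.h>

#define NR_MODEL_CPUS	4

struct cpu_cache_info {
	unsigned int num_leaves;
	void *leaves;			/* stands in for the per-leaf array */
};

/* Pretend per-CPU leaf counts; index 0 is the primary CPU. */
static const unsigned int detected_leaves[NR_MODEL_CPUS] = { 3, 3, 3, 5 };

static void alloc_leaves(struct cpu_cache_info *ci, unsigned int leaves)
{
	free(ci->leaves);
	ci->leaves = calloc(leaves, 64);	/* 64: placeholder leaf size */
	ci->num_leaves = leaves;
}

int main(void)
{
	struct cpu_cache_info ci[NR_MODEL_CPUS] = { 0 };
	unsigned int cpu;

	/* Early, preemptible phase: assume every CPU looks like CPU 0. */
	for (cpu = 0; cpu < NR_MODEL_CPUS; cpu++)
		alloc_leaves(&ci[cpu], detected_leaves[0]);

	/* Per-CPU bring-up: reallocate only when the assumption was wrong. */
	for (cpu = 0; cpu < NR_MODEL_CPUS; cpu++) {
		if (detected_leaves[cpu] > ci[cpu].num_leaves) {
			printf("cpu%u: asymmetric, reallocating\n", cpu);
			alloc_leaves(&ci[cpu], detected_leaves[cpu]);
		}
	}

	for (cpu = 0; cpu < NR_MODEL_CPUS; cpu++)
		free(ci[cpu].leaves);
	return 0;
}

On a symmetrical system the second loop never allocates, which is why doing
the allocation early on the primary CPU avoids the RT splat; only the
asymmetric CPU in the model (cpu3) still needs a reallocation.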
Signed-off-by: Radu Rendec
---
 arch/arm64/kernel/cacheinfo.c | 32 ++++++++++++++++++++++++--------
 1 file changed, 24 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kernel/cacheinfo.c b/arch/arm64/kernel/cacheinfo.c
index c307f69e9b55..520d17e4ebe9 100644
--- a/arch/arm64/kernel/cacheinfo.c
+++ b/arch/arm64/kernel/cacheinfo.c
@@ -38,21 +38,37 @@ static void ci_leaf_init(struct cacheinfo *this_leaf,
 	this_leaf->type = type;
 }
 
-int init_cache_level(unsigned int cpu)
+static void detect_cache_level(unsigned int *level, unsigned int *leaves)
 {
-	unsigned int ctype, level, leaves;
-	int fw_level, ret;
-	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+	unsigned int ctype;
 
-	for (level = 1, leaves = 0; level <= MAX_CACHE_LEVEL; level++) {
-		ctype = get_cache_type(level);
+	for (*level = 1, *leaves = 0; *level <= MAX_CACHE_LEVEL; (*level)++) {
+		ctype = get_cache_type(*level);
 		if (ctype == CACHE_TYPE_NOCACHE) {
-			level--;
+			(*level)--;
 			break;
 		}
 		/* Separate instruction and data caches */
-		leaves += (ctype == CACHE_TYPE_SEPARATE) ? 2 : 1;
+		*leaves += (ctype == CACHE_TYPE_SEPARATE) ? 2 : 1;
 	}
+}
+
+int early_cache_level(unsigned int cpu)
+{
+	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+
+	detect_cache_level(&this_cpu_ci->num_levels, &this_cpu_ci->num_leaves);
+
+	return 0;
+}
+
+int init_cache_level(unsigned int cpu)
+{
+	unsigned int level, leaves;
+	int fw_level, ret;
+	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+
+	detect_cache_level(&level, &leaves);
 
 	if (acpi_disabled) {
 		fw_level = of_find_last_cache_level(cpu);
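As a companion to the diff above, here is a standalone model of the counting
loop that the patch factors out into detect_cache_level(). This is an
assumption-laden sketch, not kernel code: get_cache_type() is mocked with a
fixed table instead of reading CLIDR_EL1, and the cache type constants are
simplified stand-ins for the kernel's CACHE_TYPE_* values.

/*
 * Standalone model of the detect_cache_level() counting loop. The mocked
 * get_cache_type() below replaces the CLIDR_EL1 read; the enum values are
 * simplified for the example and do not match the kernel's definitions.
 */
#include <stdio.h>

#define MAX_CACHE_LEVEL	7

enum cache_type {
	CACHE_TYPE_NOCACHE,
	CACHE_TYPE_UNIFIED,
	CACHE_TYPE_SEPARATE,	/* split I/D caches: counts as two leaves */
};

/* Mocked topology: split L1, unified L2 and L3, nothing above. */
static enum cache_type get_cache_type(unsigned int level)
{
	static const enum cache_type mock[MAX_CACHE_LEVEL + 1] = {
		[1] = CACHE_TYPE_SEPARATE,
		[2] = CACHE_TYPE_UNIFIED,
		[3] = CACHE_TYPE_UNIFIED,
	};
	return level <= MAX_CACHE_LEVEL ? mock[level] : CACHE_TYPE_NOCACHE;
}

static void detect_cache_level(unsigned int *level, unsigned int *leaves)
{
	enum cache_type ctype;

	for (*level = 1, *leaves = 0; *level <= MAX_CACHE_LEVEL; (*level)++) {
		ctype = get_cache_type(*level);
		if (ctype == CACHE_TYPE_NOCACHE) {
			(*level)--;
			break;
		}
		/* Separate instruction and data caches */
		*leaves += (ctype == CACHE_TYPE_SEPARATE) ? 2 : 1;
	}
}

int main(void)
{
	unsigned int level, leaves;

	detect_cache_level(&level, &leaves);
	printf("levels=%u leaves=%u\n", level, leaves);
	return 0;
}

With the mocked topology (split L1, unified L2/L3) it prints levels=3
leaves=4, showing how a separate instruction/data level contributes two
leaves to the count.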