From patchwork Mon Apr 3 23:15:50 2023
X-Patchwork-Submitter: Radu Rendec
X-Patchwork-Id: 13198920
From: Radu Rendec
To: linux-kernel@vger.kernel.org
Cc: Catalin Marinas, Will Deacon, Pierre Gondois, Sudeep Holla,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 1/2] cacheinfo: Add arch specific early level initializer
Date: Mon, 3 Apr 2023 19:15:50 -0400
Message-Id: <20230403231551.1090704-2-rrendec@redhat.com>
In-Reply-To: <20230403231551.1090704-1-rrendec@redhat.com>
References: <20230403231551.1090704-1-rrendec@redhat.com>

This patch gives architecture-specific code the ability to initialize
the cache level and allocate cacheinfo memory early, when cache level
initialization runs on the primary CPU for all possible CPUs. This is
part of a patch series that attempts to further the work in commit
5944ce092b97 ("arch_topology: Build cacheinfo from primary CPU").
Previously, in the absence of any DT/ACPI cache info, architecture-specific
cache detection and info allocation for secondary CPUs would happen in
non-preemptible context during early CPU initialization and trigger a
"BUG: sleeping function called from invalid context" splat on an RT kernel.

More specifically, this patch adds the early_cache_level() function, which
is called by fetch_cache_info() as a fallback when the number of cache
leaves cannot be extracted from DT/ACPI. In the default generic (weak)
implementation, this new function returns -ENOENT, which preserves the
original behavior for architectures that do not implement the function.

Since early detection can get the number of cache leaves wrong in some
cases*, additional logic is added to still call init_cache_level() later
on the secondary CPU, giving the architecture-specific code an opportunity
to go back and fix the initial guess. Again, the original behavior is
preserved for architectures that do not implement the new function.

* For example, on arm64, CLIDR_EL1 detection works only when it runs on
  the current CPU. In other words, a CPU cannot detect the cache depth
  of any CPU other than itself.
Signed-off-by: Radu Rendec
---
 drivers/base/cacheinfo.c  | 57 ++++++++++++++++++++++++++-------------
 include/linux/cacheinfo.h |  2 ++
 2 files changed, 40 insertions(+), 19 deletions(-)

diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
index f6573c335f4c..7f8ac0cb549f 100644
--- a/drivers/base/cacheinfo.c
+++ b/drivers/base/cacheinfo.c
@@ -398,6 +398,11 @@ static void free_cache_attributes(unsigned int cpu)
 	cache_shared_cpu_map_remove(cpu);
 }
 
+int __weak early_cache_level(unsigned int cpu)
+{
+	return -ENOENT;
+}
+
 int __weak init_cache_level(unsigned int cpu)
 {
 	return -ENOENT;
@@ -423,51 +428,65 @@ int allocate_cache_info(int cpu)
 
 int fetch_cache_info(unsigned int cpu)
 {
-	struct cpu_cacheinfo *this_cpu_ci;
+	struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
 	unsigned int levels = 0, split_levels = 0;
 	int ret;
 
-	if (acpi_disabled) {
+	if (acpi_disabled)
 		ret = init_of_cache_level(cpu);
-		if (ret < 0)
-			return ret;
-	} else {
+	else {
 		ret = acpi_get_cache_info(cpu, &levels, &split_levels);
-		if (ret < 0)
+		if (!ret) {
+			this_cpu_ci->num_levels = levels;
+			/*
+			 * This assumes that:
+			 * - there cannot be any split caches (data/instruction)
+			 *   above a unified cache
+			 * - data/instruction caches come by pair
+			 */
+			this_cpu_ci->num_leaves = levels + split_levels;
+		}
+	}
+
+	if (ret || !cache_leaves(cpu)) {
+		ret = early_cache_level(cpu);
+		if (ret)
 			return ret;
 
-		this_cpu_ci = get_cpu_cacheinfo(cpu);
-		this_cpu_ci->num_levels = levels;
-		/*
-		 * This assumes that:
-		 * - there cannot be any split caches (data/instruction)
-		 *   above a unified cache
-		 * - data/instruction caches come by pair
-		 */
-		this_cpu_ci->num_leaves = levels + split_levels;
+		if (!cache_leaves(cpu))
+			return -ENOENT;
+
+		this_cpu_ci->early_arch_info = true;
 	}
 
-	if (!cache_leaves(cpu))
-		return -ENOENT;
 	return allocate_cache_info(cpu);
 }
 
 int detect_cache_attributes(unsigned int cpu)
 {
+	unsigned int early_leaves = cache_leaves(cpu);
 	int ret;
 
 	/* Since early initialization/allocation of the cacheinfo is allowed
 	 * via fetch_cache_info() and this also gets called as CPU hotplug
 	 * callbacks via cacheinfo_cpu_online, the init/alloc can be skipped
 	 * as it will happen only once (the cacheinfo memory is never freed).
-	 * Just populate the cacheinfo.
+	 * Just populate the cacheinfo. However, if the cacheinfo has been
+	 * allocated early through the arch-specific early_cache_level() call,
+	 * there is a chance the info is wrong (this can happen on arm64). In
+	 * that case, call init_cache_level() anyway to give the arch-specific
+	 * code a chance to make things right.
 	 */
-	if (per_cpu_cacheinfo(cpu))
+	if (per_cpu_cacheinfo(cpu) && !ci_cacheinfo(cpu)->early_arch_info)
 		goto populate_leaves;
 
 	if (init_cache_level(cpu) || !cache_leaves(cpu))
 		return -ENOENT;
 
+	if (cache_leaves(cpu) <= early_leaves)
+		goto populate_leaves;
+
+	kfree(per_cpu_cacheinfo(cpu));
 	ret = allocate_cache_info(cpu);
 	if (ret)
 		return ret;
diff --git a/include/linux/cacheinfo.h b/include/linux/cacheinfo.h
index 908e19d17f49..c9d44308fc42 100644
--- a/include/linux/cacheinfo.h
+++ b/include/linux/cacheinfo.h
@@ -76,9 +76,11 @@ struct cpu_cacheinfo {
 	unsigned int num_levels;
 	unsigned int num_leaves;
 	bool cpu_map_populated;
+	bool early_arch_info;
 };
 
 struct cpu_cacheinfo *get_cpu_cacheinfo(unsigned int cpu);
+int early_cache_level(unsigned int cpu);
 int init_cache_level(unsigned int cpu);
 int init_of_cache_level(unsigned int cpu);
 int populate_cache_leaves(unsigned int cpu);