From patchwork Fri Mar 8 19:12:00 2024
X-Patchwork-Submitter: Oreoluwa Babatunde
X-Patchwork-Id: 13587214
From: Oreoluwa Babatunde
Subject: [PATCH v4 0/4] Dynamic Allocation of the reserved_mem array
Date: Fri, 8 Mar 2024 11:12:00 -0800
Message-ID: <20240308191204.819487-1-quic_obabatun@quicinc.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org
The reserved_mem array is used to store data for the different reserved
memory regions defined in the DT of a device. The array stores
information such as the region name, node reference, start address, and
size of each reserved memory region.

The array is currently statically allocated with a size of
MAX_RESERVED_REGIONS (64). This means that any system which defines more
than MAX_RESERVED_REGIONS reserved memory regions will not have enough
space to store the information for all of them. This can be fixed by
making reserved_mem a dynamically sized array, allocated using
memblock_alloc() based on the exact number of reserved memory regions
defined in the DT.

On architectures such as arm64, memblock-allocated memory is not
writable until after the page tables have been set up. This is an issue
because the current implementation initializes the reserved memory
regions and stores their information in the array before the page tables
are set up. Hence, dynamically allocating the reserved_mem array and
attempting to write information to it at this point will fail.
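The sizing change described above can be illustrated with a minimal
user-space sketch. This is not the kernel code: `rmem_entry`,
`count_dt_reserved_regions()`, and `alloc_reserved_mem()` are hypothetical
stand-ins, and calloc() stands in for memblock_alloc(); the point is only
that the array is sized from the actual region count rather than a fixed
MAX_RESERVED_REGIONS.

```c
#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-in for the kernel's struct reserved_mem. */
struct rmem_entry {
    const char *name;
    uint64_t    base;
    uint64_t    size;
};

/* Pretend this count came from scanning /reserved-memory in the DT. */
static int count_dt_reserved_regions(void)
{
    return 3;
}

/* Allocate the array to the exact region count instead of using a
 * static MAX_RESERVED_REGIONS-sized array.  In the kernel this would
 * be memblock_alloc(); calloc() stands in for it here. */
static struct rmem_entry *alloc_reserved_mem(int count)
{
    return calloc((size_t)count, sizeof(struct rmem_entry));
}
```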
Therefore, the allocation of the reserved_mem array needs to be done
after the page tables have been set up, which means that storing the
reserved memory regions in the array also has to wait until after the
page tables have been set up.

When the reserved memory regions defined in the DT are processed, each
region is marked as reserved by calling memblock_reserve(base, size),
where:

base = the base address of the reserved region.
size = the size of the reserved memory region.

If the region is defined with the "no-map" property,
memblock_mark_nomap(base, size) is also called. The "no-map" property
indicates to the operating system that a mapping of the specified region
must NOT be created. This also means that no access (including
speculative accesses) is allowed to this region of memory except from
the device driver that the region is being reserved for. [1]

Therefore, it is important to call memblock_reserve() and
memblock_mark_nomap() on all the reserved memory regions before the
system sets up the page tables, so that the system does not unknowingly
include any of the no-map reserved memory regions in the memory map.

There are two ways to define how/where a reserved memory region is
placed in memory:

i) Statically-placed reserved memory regions, i.e. regions defined with
a fixed start address and size using the "reg" property in the DT.

ii) Dynamically-placed reserved memory regions, i.e. regions defined by
specifying a range of addresses where they can be placed in memory,
using the "alloc-ranges" and "size" properties in the DT.

The dynamically-placed reserved memory regions are only assigned a start
address at runtime, and this needs to happen before the page tables are
set up so that memblock_reserve() and memblock_mark_nomap() can be
called on the allocated region as explained above.
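The early reservation pass described above can be sketched as follows.
This is a hedged user-space model, not the kernel implementation:
`dt_region`, `reserve_region()`, and the `_sim` counters are hypothetical,
standing in for memblock state so the reserve-then-maybe-nomap decision
per region can be shown in isolation.

```c
#include <stdbool.h>
#include <stdint.h>

/* Counters standing in for memblock state, so the simulated calls can
 * be observed. */
static int reserved_calls;
static int nomap_calls;

static void memblock_reserve_sim(uint64_t base, uint64_t size)
{
    (void)base; (void)size;
    reserved_calls++;
}

static void memblock_mark_nomap_sim(uint64_t base, uint64_t size)
{
    (void)base; (void)size;
    nomap_calls++;
}

/* One region as parsed from the DT: base/size plus whether the node
 * carries the "no-map" property. */
struct dt_region {
    uint64_t base;
    uint64_t size;
    bool     no_map;
};

/* Every region is reserved; "no-map" regions are additionally marked
 * nomap so they never enter the kernel's memory map. */
static void reserve_region(const struct dt_region *r)
{
    memblock_reserve_sim(r->base, r->size);
    if (r->no_map)
        memblock_mark_nomap_sim(r->base, r->size);
}
```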
Since the dynamically allocated reserved_mem array only becomes
available after the page tables have been set up, the information for
the dynamically-placed reserved memory regions needs to be stored
somewhere temporarily until the reserved_mem array is available.

Therefore, this series makes use of a temporary static array to store
the information of the dynamically-placed reserved memory regions until
the reserved_mem array is allocated. Once the reserved_mem array is
available, the information is copied over from the temporary array into
the reserved_mem array, and the memory for the temporary array is freed
back to the system.

The information for the statically-placed reserved memory regions does
not need to be stored in a temporary array because their start addresses
are already stored in the devicetree. Hence, the only thing that needs
to be done for these regions before the page tables are set up is to
call memblock_reserve() and memblock_mark_nomap(). Once the reserved_mem
array is allocated, the information for the statically-placed reserved
memory regions is added to the array.

Note: Because a temporary static array is used to store the information
of the dynamically-placed reserved memory regions, a limit of 64 still
exists for this particular kind of reserved memory region. From my
observation, these regions are typically small in number, so I do not
expect this to be an issue for now.

Dependency:
This series depends on the acceptance of the patchset below for proper
behavior on the openrisc and sh architectures. The patchset has already
been sent out and is pending review from the openrisc and sh
maintainers.
https://lore.kernel.org/all/1707524971-146908-1-git-send-email-quic_obabatun@quicinc.com/

Patch Versions:

v4 (Current Patchset):
- Move fdt_init_reserved_mem() back into the unflatten_device_tree()
  function.
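The two-phase scheme above (stash in a temporary static array before the
page tables exist, then copy into the real array afterwards) can be
sketched in user space. This is an illustrative model under stated
assumptions, not the series' code: `tmp_dynamic`, `stash_dynamic_region()`,
and `finalize_reserved_mem()` are hypothetical names, calloc() stands in
for memblock_alloc(), and in the kernel the temporary array would be
__initdata so its memory is returned to the system after boot.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* The remaining static limit the cover letter notes for
 * dynamically-placed regions. */
#define MAX_DYNAMIC_REGIONS 64

struct rmem_entry {
    uint64_t base;
    uint64_t size;
};

/* Temporary static array (would be __initdata in the kernel). */
static struct rmem_entry tmp_dynamic[MAX_DYNAMIC_REGIONS];
static int tmp_count;

/* Phase 1 (before page tables): record a dynamically-placed region
 * once it has been assigned a start address. */
static void stash_dynamic_region(uint64_t base, uint64_t size)
{
    tmp_dynamic[tmp_count].base = base;
    tmp_dynamic[tmp_count].size = size;
    tmp_count++;
}

/* Phase 2 (after page tables): allocate the real array for all regions
 * and copy the stashed dynamic entries over. */
static struct rmem_entry *finalize_reserved_mem(int total, int *copied)
{
    struct rmem_entry *arr = calloc((size_t)total, sizeof(*arr));

    memcpy(arr, tmp_dynamic, (size_t)tmp_count * sizeof(*arr));
    *copied = tmp_count;
    return arr;
}
```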
- Fix warnings found by the kernel test robot:
  https://lore.kernel.org/all/202401281219.iIhqs1Si-lkp@intel.com/
  https://lore.kernel.org/all/202401281304.tsu89Kcm-lkp@intel.com/
  https://lore.kernel.org/all/202401291128.e7tdNh5x-lkp@intel.com/

v3: https://lore.kernel.org/all/20240126235425.12233-1-quic_obabatun@quicinc.com/
- Make use of __initdata to delete the temporary static array after
  dynamically allocating memory for the reserved_mem array using
  memblock.
- Move the call to fdt_init_reserved_mem() out of the
  unflatten_device_tree() function and into architecture-specific setup
  code.
- Break up the changes for the individual architectures into separate
  patches.

v2: https://lore.kernel.org/all/20231204041339.9902-1-quic_obabatun@quicinc.com/
- Extend the changes to all other relevant architectures by moving
  fdt_init_reserved_mem() into the unflatten_device_tree() function.
- Add code to use the unflattened devicetree APIs to process the
  reserved memory regions.

v1: https://lore.kernel.org/all/20231019184825.9712-1-quic_obabatun@quicinc.com/

References:
[1] https://github.com/devicetree-org/dt-schema/blob/main/dtschema/schemas/reserved-memory/reserved-memory.yaml#L79

Oreoluwa Babatunde (4):
  of: reserved_mem: Restructure how the reserved memory regions are
    processed
  of: reserved_mem: Add code to dynamically allocate reserved_mem array
  of: reserved_mem: Use the unflatten_devicetree APIs to scan reserved
    mem. nodes
  of: reserved_mem: Rename fdt_* functions to reflect use of
    unflatten_devicetree APIs

 drivers/of/fdt.c                |  39 +++++--
 drivers/of/of_private.h         |   5 +-
 drivers/of/of_reserved_mem.c    | 186 +++++++++++++++++++++++---------
 include/linux/of_reserved_mem.h |  11 +-
 kernel/dma/coherent.c           |   8 +-
 kernel/dma/contiguous.c         |   8 +-
 kernel/dma/swiotlb.c            |  10 +-
 7 files changed, 193 insertions(+), 74 deletions(-)