From patchwork Tue Mar 4 19:03:46 2025
X-Patchwork-Submitter: steven chen
X-Patchwork-Id: 14001293
From: steven chen
To: zohar@linux.ibm.com, stefanb@linux.ibm.com, roberto.sassu@huaweicloud.com,
    roberto.sassu@huawei.com, eric.snowberg@oracle.com, ebiederm@xmission.com,
    paul@paul-moore.com, code@tyhicks.com, bauermann@kolabnow.com,
    linux-integrity@vger.kernel.org, kexec@lists.infradead.org,
    linux-security-module@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: madvenka@linux.microsoft.com, nramas@linux.microsoft.com,
    James.Bottomley@HansenPartnership.com, bhe@redhat.com, vgoyal@redhat.com,
    dyoung@redhat.com
Subject: [PATCH v9 2/7] kexec: define functions to map and unmap segments
Date: Tue, 4 Mar 2025 11:03:46 -0800
Message-Id: <20250304190351.96975-3-chenste@linux.microsoft.com>
In-Reply-To: <20250304190351.96975-1-chenste@linux.microsoft.com>
References: <20250304190351.96975-1-chenste@linux.microsoft.com>

The content of memory segments carried over to the new kernel during the
kexec system call can be changed at the kexec 'execute' stage, but the size of the
memory segments cannot.

To copy IMA measurement logs during the kexec operation, IMA needs to
allocate memory at the kexec 'load' stage and map the segments to the
kimage structure. The mapped address will then be used to copy IMA
measurements during the kexec 'execute' stage.

Currently, the mechanism to map and unmap segments to the kimage
structure is not available to subsystems outside of kexec.

Implement kimage_map_segment() to enable IMA to map the measurement log
list to the kimage structure during the kexec 'load' stage. This
function takes a kimage pointer, a memory address, and a size, then
gathers the source pages within the specified address range, creates an
array of page pointers, and maps these to a contiguous virtual address
range. The function returns the start virtual address of this range if
successful, or NULL on failure.

Implement kimage_unmap_segment() for unmapping segments using vunmap().

From: Tushar Sugandhi
Signed-off-by: Tushar Sugandhi
Cc: Eric Biederman
Cc: Baoquan He
Cc: Vivek Goyal
Cc: Dave Young
Signed-off-by: steven chen
---
 include/linux/kexec.h |  6 +++++
 kernel/kexec_core.c   | 54 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 60 insertions(+)

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index f0e9f8eda7a3..7d6b12f8b8d0 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -467,13 +467,19 @@ extern bool kexec_file_dbg_print;
 #define kexec_dprintk(fmt, arg...) \
 	do { if (kexec_file_dbg_print) pr_info(fmt, ##arg); } while (0)
 
+extern void *kimage_map_segment(struct kimage *image, unsigned long addr, unsigned long size);
+extern void kimage_unmap_segment(void *buffer);
 #else /* !CONFIG_KEXEC_CORE */
 struct pt_regs;
 struct task_struct;
+struct kimage;
 static inline void __crash_kexec(struct pt_regs *regs) { }
 static inline void crash_kexec(struct pt_regs *regs) { }
 static inline int kexec_should_crash(struct task_struct *p) { return 0; }
 static inline int kexec_crash_loaded(void) { return 0; }
+static inline void *kimage_map_segment(struct kimage *image, unsigned long addr, unsigned long size)
+{ return NULL; }
+static inline void kimage_unmap_segment(void *buffer) { }
 #define kexec_in_progress false
 #endif /* CONFIG_KEXEC_CORE */

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index c0bdc1686154..63e4d16b6023 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -867,6 +867,60 @@ int kimage_load_segment(struct kimage *image,
 	return result;
 }
 
+void *kimage_map_segment(struct kimage *image,
+			 unsigned long addr, unsigned long size)
+{
+	unsigned long eaddr = addr + size;
+	unsigned long src_page_addr, dest_page_addr = 0;
+	unsigned int npages;
+	struct page **src_pages;
+	int i;
+	kimage_entry_t *ptr, entry;
+	void *vaddr = NULL;
+
+	/*
+	 * Collect the source pages and map them in a contiguous VA range.
+	 */
+	npages = PFN_UP(eaddr) - PFN_DOWN(addr);
+	src_pages = kmalloc_array(npages, sizeof(*src_pages), GFP_KERNEL);
+	if (!src_pages) {
+		pr_err("Could not allocate ima pages array.\n");
+		return NULL;
+	}
+
+	i = 0;
+	for_each_kimage_entry(image, ptr, entry) {
+		if (entry & IND_DESTINATION) {
+			dest_page_addr = entry & PAGE_MASK;
+		} else if (entry & IND_SOURCE) {
+			if (dest_page_addr >= addr && dest_page_addr < eaddr) {
+				src_page_addr = entry & PAGE_MASK;
+				src_pages[i++] =
+					virt_to_page(__va(src_page_addr));
+				if (i == npages)
+					break;
+				dest_page_addr += PAGE_SIZE;
+			}
+		}
+	}
+
+	/* Sanity check: all pages within the segment should be found. */
+	WARN_ON(i < npages);
+
+	vaddr = vmap(src_pages, npages, VM_MAP, PAGE_KERNEL);
+	kfree(src_pages);
+
+	if (!vaddr)
+		pr_err("Could not map ima buffer.\n");
+
+	return vaddr;
+}
+
+void kimage_unmap_segment(void *segment_buffer)
+{
+	vunmap(segment_buffer);
+}
+
 struct kexec_load_limit {
 	/* Mutex protects the limit count. */
 	struct mutex mutex;
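
For reviewers, a minimal sketch of the intended call pattern from the IMA
side, assuming the address and size of the measurement log segment were
recorded at the kexec 'load' stage. The names ima_log_addr, ima_log_size,
ima_log_va, and the example_* helpers below are illustrative only and are
not part of this patch:

    static void *ima_log_va;

    /* kexec 'load' stage: map the segment allocated for the IMA log. */
    static int example_map_ima_log(struct kimage *image,
                                   unsigned long ima_log_addr,
                                   unsigned long ima_log_size)
    {
            ima_log_va = kimage_map_segment(image, ima_log_addr,
                                            ima_log_size);
            if (!ima_log_va)
                    return -ENOMEM;
            return 0;
    }

    /* kexec 'execute' stage: the segment is already mapped, so the
     * latest measurements can be copied without any new allocation.
     */
    static void example_copy_ima_log(const void *log, size_t len)
    {
            memcpy(ima_log_va, log, len);
    }

    /* After copying (or on error): release the mapping. */
    static void example_unmap_ima_log(void)
    {
            kimage_unmap_segment(ima_log_va);
            ima_log_va = NULL;
    }

The point of the vmap() in kimage_map_segment() is that the segment's
source pages may be physically scattered; the mapping gives the caller one
contiguous virtual view of them, and kimage_unmap_segment() releases only
that mapping, not the pages themselves.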