From patchwork Tue Aug 30 17:45:01 2016
X-Patchwork-Submitter: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
X-Patchwork-Id: 9305837
From: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
To: kexec@lists.infradead.org
Cc: linux-security-module@vger.kernel.org,
    linux-ima-devel@lists.sourceforge.net, linuxppc-dev@lists.ozlabs.org,
    linux-kernel@vger.kernel.org, Eric Biederman, Dave Young, Vivek Goyal,
    Baoquan He, Michael Ellerman, Stewart Smith, Mimi Zohar, Eric Richter,
    Andrew Morton, Balbir Singh, Thiago Jung Bauermann
Subject: [PATCH v4 1/5] kexec_file: Add buffer hand-over support for the next kernel
Date: Tue, 30 Aug 2016 14:45:01 -0300
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1472579105-26296-1-git-send-email-bauerman@linux.vnet.ibm.com>
References: <1472579105-26296-1-git-send-email-bauerman@linux.vnet.ibm.com>
Message-Id: <1472579105-26296-2-git-send-email-bauerman@linux.vnet.ibm.com>

The buffer hand-over mechanism allows the currently running kernel to
pass data to the kernel that will be kexec'd, via a kexec segment. The
second kernel can check whether the previous kernel sent data and
retrieve it.

This is the architecture-independent part of the feature.

Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
---
 include/linux/kexec.h | 31 +++++++++++++++++++++++
 kernel/kexec_file.c   | 68 +++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 99 insertions(+)

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index d419d0e51fe5..16561e96a6d7 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -383,6 +383,37 @@ static inline void *boot_phys_to_virt(unsigned long entry)
 	return phys_to_virt(boot_phys_to_phys(entry));
 }
 
+#ifdef CONFIG_KEXEC_FILE
+bool __weak kexec_can_hand_over_buffer(void);
+int __weak arch_kexec_add_handover_buffer(struct kimage *image,
+					  unsigned long load_addr,
+					  unsigned long size);
+int kexec_add_handover_buffer(struct kexec_buf *kbuf);
+int __weak kexec_get_handover_buffer(void **addr, unsigned long *size);
+int __weak kexec_free_handover_buffer(void);
+#else
+struct kexec_buf;
+
+static inline bool kexec_can_hand_over_buffer(void)
+{
+	return false;
+}
+
+static inline int kexec_add_handover_buffer(struct kexec_buf *kbuf)
+{
+	return -ENOTSUPP;
+}
+
+static inline int kexec_get_handover_buffer(void **addr, unsigned long *size)
+{
+	return -ENOTSUPP;
+}
+
+static inline int kexec_free_handover_buffer(void)
+{
+	return -ENOTSUPP;
+}
+#endif /* CONFIG_KEXEC_FILE */
 #else /* !CONFIG_KEXEC_CORE */
 struct pt_regs;
 struct task_struct;
diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index 3401816700f3..f5684adfad07 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -113,6 +113,74 @@ void kimage_file_post_load_cleanup(struct kimage *image)
 	image->image_loader_data = NULL;
 }
 
+/**
+ * kexec_can_hand_over_buffer - can we pass data to the kexec'd kernel?
+ */
+bool __weak kexec_can_hand_over_buffer(void)
+{
+	return false;
+}
+
+/**
+ * arch_kexec_add_handover_buffer - do arch-specific steps to handover buffer
+ *
+ * Architectures should use this function to pass on the handover buffer
+ * information to the next kernel.
+ *
+ * Return: 0 on success, negative errno on error.
+ */
+int __weak arch_kexec_add_handover_buffer(struct kimage *image,
+					  unsigned long load_addr,
+					  unsigned long size)
+{
+	return -ENOTSUPP;
+}
+
+/**
+ * kexec_add_handover_buffer - add buffer to be used by the next kernel
+ * @kbuf:	Buffer contents and memory parameters.
+ *
+ * This function assumes that kexec_mutex is held.
+ * On successful return, @kbuf->mem will have the physical address of
+ * the buffer in the next kernel.
+ *
+ * Return: 0 on success, negative errno on error.
+ */
+int kexec_add_handover_buffer(struct kexec_buf *kbuf)
+{
+	int ret;
+
+	if (!kexec_can_hand_over_buffer())
+		return -ENOTSUPP;
+
+	ret = kexec_add_buffer(kbuf);
+	if (ret)
+		return ret;
+
+	return arch_kexec_add_handover_buffer(kbuf->image, kbuf->mem,
+					      kbuf->memsz);
+}
+
+/**
+ * kexec_get_handover_buffer - get the handover buffer from the previous kernel
+ * @addr:	On successful return, set to point to the buffer contents.
+ * @size:	On successful return, set to the buffer size.
+ *
+ * Return: 0 on success, negative errno on error.
+ */
+int __weak kexec_get_handover_buffer(void **addr, unsigned long *size)
+{
+	return -ENOTSUPP;
+}
+
+/**
+ * kexec_free_handover_buffer - free memory used by the handover buffer
+ */
+int __weak kexec_free_handover_buffer(void)
+{
+	return -ENOTSUPP;
+}
+
 /*
  * In file mode list of segments is prepared by kernel. Copy relevant
  * data from user space, do error checking, prepare segment list