From patchwork Tue Aug 16 17:00:31 2016
X-Patchwork-Submitter: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
X-Patchwork-Id: 9284275
From: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
To: Andrew Morton
Cc: kexec@lists.infradead.org, linux-security-module@vger.kernel.org,
	linux-ima-devel@lists.sourceforge.net, linuxppc-dev@lists.ozlabs.org,
	linux-kernel@vger.kernel.org, x86@kernel.org, Eric Biederman,
	Dave Young, Vivek Goyal, Baoquan He, Michael Ellerman,
	Benjamin Herrenschmidt, Paul Mackerras, Stewart Smith,
	Samuel Mendoza-Jonas, Mimi Zohar, Eric Richter, Thomas Gleixner,
	Ingo Molnar, "H. Peter Anvin", Petko Manolov, David Laight,
	Balbir Singh
Subject: Re: [PATCH v2 4/6] kexec_file: Add mechanism to update kexec segments.
Date: Tue, 16 Aug 2016 14:00:31 -0300
In-Reply-To: <20160815152756.78ea7a61a3342547b9e694e5@linux-foundation.org>
References: <1471058305-30198-1-git-send-email-bauerman@linux.vnet.ibm.com>
	<1471058305-30198-5-git-send-email-bauerman@linux.vnet.ibm.com>
	<20160815152756.78ea7a61a3342547b9e694e5@linux-foundation.org>
Message-Id: <1734678.Uaq7DTajoE@hactar>

Hello Andrew,

Thank you for your review!

On Monday, 15 August 2016, 15:27:56 Andrew Morton wrote:
> On Sat, 13 Aug 2016 00:18:23 -0300 Thiago Jung Bauermann wrote:
> > +/**
> > + * kexec_update_segment - update the contents of a kimage segment
> > + * @buffer:	New contents of the segment.
> > + * @bufsz:	@buffer size.
> > + * @load_addr:	Segment's physical address in the next kernel.
> > + * @memsz:	Segment size.
> > + *
> > + * This function assumes kexec_mutex is held.
> > + *
> > + * Return: 0 on success, negative errno on error.
> > + */
> > +int kexec_update_segment(const char *buffer, unsigned long bufsz,
> > +			 unsigned long load_addr, unsigned long memsz)
> > +{
> > +	int i;
> > +	unsigned long entry;
> > +	unsigned long *ptr = NULL;
> > +	void *dest = NULL;
> > +
> > +	if (kexec_image == NULL) {
> > +		pr_err("Can't update segment: no kexec image loaded.\n");
> > +		return -EINVAL;
> > +	}
> > +
> > +	/*
> > +	 * kexec_add_buffer rounds up segment sizes to PAGE_SIZE, so
> > +	 * we have to do it here as well.
> > +	 */
> > +	memsz = ALIGN(memsz, PAGE_SIZE);
> > +
> > +	for (i = 0; i < kexec_image->nr_segments; i++)
> > +		/* We only support updating whole segments. */
> > +		if (load_addr == kexec_image->segment[i].mem &&
> > +		    memsz == kexec_image->segment[i].memsz) {
> > +			if (kexec_image->segment[i].do_checksum) {
> > +				pr_err("Trying to update non-modifiable segment.\n");
> > +				return -EINVAL;
> > +			}
> > +
> > +			break;
> > +		}
> > +	if (i == kexec_image->nr_segments) {
> > +		pr_err("Couldn't find segment to update: 0x%lx, size 0x%lx\n",
> > +		       load_addr, memsz);
> > +		return -EINVAL;
> > +	}
> > +
> > +	for (entry = kexec_image->head; !(entry & IND_DONE) && memsz;
> > +	     entry = *ptr++) {
> > +		void *addr = (void *) (entry & PAGE_MASK);
> > +
> > +		switch (entry & IND_FLAGS) {
> > +		case IND_DESTINATION:
> > +			dest = addr;
> > +			break;
> > +		case IND_INDIRECTION:
> > +			ptr = __va(addr);
> > +			break;
> > +		case IND_SOURCE:
> > +			/* Shouldn't happen, but verify just to be safe. */
> > +			if (dest == NULL) {
> > +				pr_err("Invalid kexec entries list.");
> > +				return -EINVAL;
> > +			}
> > +
> > +			if (dest == (void *) load_addr) {
> > +				struct page *page;
> > +				char *ptr;
> > +				size_t uchunk, mchunk;
> > +
> > +				page = kmap_to_page(addr);
> > +
> > +				ptr = kmap(page);
>
> kmap_atomic() could be used here, and it is appreciably faster.

Good idea. The patch below implements your suggestion.
This has a consequence for patch 5/6 in this series, because it makes this
code be used in the path of the kexec_file_load and kexec_load syscalls. In
the latter case, there's a call to copy_from_user and thus kmap_atomic can't
be used. I can change the patch to use kmap_atomic if state->from_kernel is
true and kmap otherwise, but perhaps this is one more hint that patch 5/6 is
not a very good idea after all.

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index 37eea32fdff1..14dda81e3e01 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -259,6 +259,8 @@ extern int kexec_purgatory_get_set_symbol(struct kimage *image,
 					  unsigned int size, bool get_value);
 extern void *kexec_purgatory_get_symbol_addr(struct kimage *image,
 					     const char *name);
+int kexec_update_segment(const char *buffer, unsigned long bufsz,
+			 unsigned long load_addr, unsigned long memsz);
 extern void __crash_kexec(struct pt_regs *);
 extern void crash_kexec(struct pt_regs *);
 int kexec_should_crash(struct task_struct *);
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 561675589511..9782b292714e 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -721,6 +721,105 @@ static struct page *kimage_alloc_page(struct kimage *image,
 	return page;
 }
 
+/**
+ * kexec_update_segment - update the contents of a kimage segment
+ * @buffer:	New contents of the segment.
+ * @bufsz:	@buffer size.
+ * @load_addr:	Segment's physical address in the next kernel.
+ * @memsz:	Segment size.
+ *
+ * This function assumes kexec_mutex is held.
+ *
+ * Return: 0 on success, negative errno on error.
+ */
+int kexec_update_segment(const char *buffer, unsigned long bufsz,
+			 unsigned long load_addr, unsigned long memsz)
+{
+	int i;
+	unsigned long entry;
+	unsigned long *ptr = NULL;
+	void *dest = NULL;
+
+	if (kexec_image == NULL) {
+		pr_err("Can't update segment: no kexec image loaded.\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * kexec_add_buffer rounds up segment sizes to PAGE_SIZE, so
+	 * we have to do it here as well.
+	 */
+	memsz = ALIGN(memsz, PAGE_SIZE);
+
+	for (i = 0; i < kexec_image->nr_segments; i++)
+		/* We only support updating whole segments. */
+		if (load_addr == kexec_image->segment[i].mem &&
+		    memsz == kexec_image->segment[i].memsz) {
+			if (kexec_image->segment[i].do_checksum) {
+				pr_err("Trying to update non-modifiable segment.\n");
+				return -EINVAL;
+			}
+
+			break;
+		}
+	if (i == kexec_image->nr_segments) {
+		pr_err("Couldn't find segment to update: 0x%lx, size 0x%lx\n",
+		       load_addr, memsz);
+		return -EINVAL;
+	}
+
+	for (entry = kexec_image->head; !(entry & IND_DONE) && memsz;
+	     entry = *ptr++) {
+		void *addr = (void *) (entry & PAGE_MASK);
+
+		switch (entry & IND_FLAGS) {
+		case IND_DESTINATION:
+			dest = addr;
+			break;
+		case IND_INDIRECTION:
+			ptr = __va(addr);
+			break;
+		case IND_SOURCE:
+			/* Shouldn't happen, but verify just to be safe. */
+			if (dest == NULL) {
+				pr_err("Invalid kexec entries list.");
+				return -EINVAL;
+			}
+
+			if (dest == (void *) load_addr) {
+				struct page *page;
+				char *ptr;
+				size_t uchunk, mchunk;
+
+				page = kmap_to_page(addr);
+
+				ptr = kmap_atomic(page);
+				ptr += load_addr & ~PAGE_MASK;
+				mchunk = min_t(size_t, memsz,
+					       PAGE_SIZE - (load_addr & ~PAGE_MASK));
+				uchunk = min(bufsz, mchunk);
+				memcpy(ptr, buffer, uchunk);
+
+				kunmap_atomic(ptr);
+
+				bufsz -= uchunk;
+				load_addr += mchunk;
+				buffer += mchunk;
+				memsz -= mchunk;
+			}
+			dest += PAGE_SIZE;
+		}
+
+		/* Shouldn't happen, but verify just to be safe. */
+		if (ptr == NULL) {
+			pr_err("Invalid kexec entries list.");
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
 static int kimage_load_normal_segment(struct kimage *image,
 				      struct kexec_segment *segment)
 {