From patchwork Fri Dec  3 10:42:29 2021
X-Patchwork-Submitter: Amit Daniel Kachhap
X-Patchwork-Id: 12654801
From: Amit Daniel Kachhap
To: linux-kernel@vger.kernel.org
Cc: Christoph Hellwig, Vincenzo Frascino, Kevin Brodsky, linux-fsdevel,
    kexec, Amit Daniel Kachhap, linux-ia64
Subject: [RFC PATCH 12/14] ia64/crash_dump: Use the new interface
 copy_oldmem_page_buf
Date: Fri, 3 Dec 2021 16:12:29 +0530
Message-Id: <20211203104231.17597-13-amit.kachhap@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20211203104231.17597-1-amit.kachhap@arm.com>
References: <20211203104231.17597-1-amit.kachhap@arm.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

The current interface copy_oldmem_page() passes the user pointer without
the __user annotation, so its implementation has to do unnecessary
user/kernel pointer conversions. Use the new interface
copy_oldmem_page_buf() to avoid this issue.

Cc: linux-ia64
Signed-off-by: Amit Daniel Kachhap
---
 arch/ia64/kernel/crash_dump.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/arch/ia64/kernel/crash_dump.c b/arch/ia64/kernel/crash_dump.c
index 0ed3c3dee4cd..1aea8dbe06de 100644
--- a/arch/ia64/kernel/crash_dump.c
+++ b/arch/ia64/kernel/crash_dump.c
@@ -15,37 +15,38 @@
 #include
 
 /**
- * copy_oldmem_page - copy one page from "oldmem"
+ * copy_oldmem_page_buf - copy one page from "oldmem"
  * @pfn: page frame number to be copied
- * @buf: target memory address for the copy; this can be in kernel address
- *	space or user address space (see @userbuf)
+ * @ubuf: target user memory pointer for the copy; use copy_to_user() if this
+ *	pointer is not NULL
+ * @kbuf: target kernel memory pointer for the copy; use memcpy() if this
+ *	pointer is not NULL
  * @csize: number of bytes to copy
  * @offset: offset in bytes into the page (based on pfn) to begin the copy
- * @userbuf: if set, @buf is in user address space, use copy_to_user(),
- *	otherwise @buf is in kernel address space, use memcpy().
  *
- * Copy a page from "oldmem". For this page, there is no pte mapped
- * in the current kernel. We stitch up a pte, similar to kmap_atomic.
+ * Copy a page from "oldmem" into the buffer pointed by either @ubuf or @kbuf.
+ * For this page, there is no pte mapped in the current kernel. We stitch up a
+ * pte, similar to kmap_atomic.
  *
  * Calling copy_to_user() in atomic context is not desirable. Hence first
  * copying the data to a pre-allocated kernel page and then copying to user
  * space in non-atomic context.
  */
 ssize_t
-copy_oldmem_page(unsigned long pfn, char *buf,
-		size_t csize, unsigned long offset, int userbuf)
+copy_oldmem_page_buf(unsigned long pfn, char __user *ubuf, char *kbuf,
+		size_t csize, unsigned long offset)
 {
 	void *vaddr;
 
 	if (!csize)
 		return 0;
 	vaddr = __va(pfn<<PAGE_SHIFT);
[remainder of the diff truncated in the archived message]
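
For readers following along, here is a minimal sketch of how a caller might
use the split-pointer interface. It is not part of this patch: the function
name sketch_read_oldmem and the surrounding code are hypothetical, and it
assumes copy_oldmem_page_buf() is declared in <linux/crash_dump.h> as the
earlier patches in this series suggest; only the prototype is taken from the
hunk above.

	#include <linux/crash_dump.h>
	#include <linux/types.h>
	#include <linux/uaccess.h>

	/*
	 * Illustrative only: exactly one of @ubuf/@kbuf is expected to be
	 * non-NULL, and the arch helper then picks copy_to_user() or
	 * memcpy() itself, so the caller never has to cast away the
	 * __user annotation.
	 */
	static ssize_t sketch_read_oldmem(unsigned long pfn,
					  char __user *ubuf, char *kbuf,
					  size_t csize, unsigned long offset)
	{
		/*
		 * With the old interface the __user pointer had to be
		 * force-cast to a plain char *, e.g.:
		 *
		 *	copy_oldmem_page(pfn, (__force char *)ubuf,
		 *			 csize, offset, 1);
		 */
		return copy_oldmem_page_buf(pfn, ubuf, kbuf, csize, offset);
	}

Keeping the __user annotation intact all the way down to copy_to_user()
avoids that cast and lets sparse check the vmcore read path end to end.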