From patchwork Mon Oct 14 14:44:59 2024
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 13835264
From: Christoph Hellwig
To: Arnd Bergmann
Cc: linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
    linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
    loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
    linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org,
    linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
    linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
    linux-um@lists.infradead.org, linux-arch@vger.kernel.org
Subject: [PATCH 2/2] asm-generic: add an optional pfn_valid check to page_to_phys
Date: Mon, 14 Oct 2024 16:44:59 +0200
Message-ID: <20241014144506.51754-3-hch@lst.de>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20241014144506.51754-1-hch@lst.de>
References: <20241014144506.51754-1-hch@lst.de>

page_to_pfn is usually implemented by pointer arithmetic on the memory
map, which means that bogus input can lead to even more bogus output.
Powerpc had a pfn_valid check on the result in its page_to_phys
implementation when CONFIG_DEBUG_VIRTUAL is defined, which seems
generally useful, so add that to the generic version.
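[Editor's note, not part of the patch: a minimal userspace sketch of the failure
mode the check guards against. The toy mem_map, NR_PAGES, ARCH_PFN_OFFSET value
and the toy_* helper names are invented for the example; it only shows how
FLATMEM-style pointer arithmetic turns a struct page pointer that is not part of
the memory map into an equally bogus PFN, and how a pfn_valid-style range check
exposes that.]

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins; the sizes and offsets are invented for the sketch. */
struct page { unsigned long flags; };

#define ARCH_PFN_OFFSET	0x100UL
#define NR_PAGES	1024UL

static struct page mem_map[NR_PAGES];

/*
 * FLATMEM-style page_to_pfn: pure pointer arithmetic on mem_map.
 * (Subtracting a pointer that is not in the array is strictly undefined
 * in ISO C, but this is exactly what happens in practice with bad input.)
 */
static unsigned long toy_page_to_pfn(const struct page *page)
{
	return (unsigned long)(page - mem_map) + ARCH_PFN_OFFSET;
}

/* pfn_valid-style range check. */
static bool toy_pfn_valid(unsigned long pfn)
{
	return pfn >= ARCH_PFN_OFFSET && pfn < ARCH_PFN_OFFSET + NR_PAGES;
}

int main(void)
{
	struct page bogus;	/* lives on the stack, not in mem_map */
	unsigned long pfn = toy_page_to_pfn(&bogus);

	/* The arithmetic happily produces a number ... */
	printf("bogus pfn: %lu\n", pfn);
	/* ... and only the validity check tells us it is garbage. */
	printf("pfn_valid: %s\n", toy_pfn_valid(pfn) ? "yes" : "no");
	return 0;
}
```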
Signed-off-by: Christoph Hellwig
Reviewed-by: Thomas Huth
---
 include/asm-generic/memory_model.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/asm-generic/memory_model.h b/include/asm-generic/memory_model.h
index a73a140cbecdd7..6d1fb6162ac1a6 100644
--- a/include/asm-generic/memory_model.h
+++ b/include/asm-generic/memory_model.h
@@ -64,7 +64,17 @@ static inline int pfn_valid(unsigned long pfn)
 #define page_to_pfn __page_to_pfn
 #define pfn_to_page __pfn_to_page
 
+#ifdef CONFIG_DEBUG_VIRTUAL
+#define page_to_phys(page)	\
+({									\
+	unsigned long __pfn = page_to_pfn(page);			\
+	\
+	WARN_ON_ONCE(!pfn_valid(__pfn));				\
+	PFN_PHYS(__pfn);						\
+})
+#else
 #define page_to_phys(page)	PFN_PHYS(page_to_pfn(page))
+#endif /* CONFIG_DEBUG_VIRTUAL */
 #define phys_to_page(phys)	pfn_to_page(PHYS_PFN(phys))
 
 #endif /* __ASSEMBLY__ */
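[Editor's note, not part of the patch: a rough userspace mock of how the
CONFIG_DEBUG_VIRTUAL variant behaves at a call site. WARN_ON_ONCE, PFN_PHYS,
pfn_valid and page_to_pfn are stubbed here, and the 1:1 page-to-PFN mapping plus
MAX_PFN are invented for the mock; build with GCC or Clang, since the ({ ... })
statement expression is a GNU extension, as in the kernel. The point is that the
macro still evaluates to the physical address while warning once on an invalid
PFN.]

```c
#include <stdbool.h>
#include <stdio.h>

/* Userspace stand-ins for the kernel helpers the macro relies on. */
#define PAGE_SHIFT	12
#define PFN_PHYS(pfn)	((unsigned long long)(pfn) << PAGE_SHIFT)

/* Warns once per call site, roughly like the kernel's WARN_ON_ONCE. */
#define WARN_ON_ONCE(cond)						\
({									\
	static bool __warned;						\
	bool __cond = (cond);						\
	if (__cond && !__warned) {					\
		__warned = true;					\
		fprintf(stderr, "WARN: %s\n", #cond);			\
	}								\
	__cond;								\
})

#define MAX_PFN		1024UL	/* invented limit for the mock */

static bool pfn_valid(unsigned long pfn)
{
	return pfn < MAX_PFN;
}

/* 1:1 "page" to PFN mapping, just so the macro has something to chew on. */
static unsigned long page_to_pfn(unsigned long page)
{
	return page;
}

/* Same shape as the CONFIG_DEBUG_VIRTUAL variant added by the patch. */
#define page_to_phys(page)						\
({									\
	unsigned long __pfn = page_to_pfn(page);			\
									\
	WARN_ON_ONCE(!pfn_valid(__pfn));				\
	PFN_PHYS(__pfn);						\
})

int main(void)
{
	printf("valid:   %#llx\n", page_to_phys(5UL));
	printf("invalid: %#llx\n", page_to_phys(4096UL));	/* warns once */
	return 0;
}
```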