From patchwork Thu Aug 3 21:23:39 2017
X-Patchwork-Submitter: Pavel Tatashin
X-Patchwork-Id: 9879849
From: Pavel Tatashin
To: linux-kernel@vger.kernel.org, sparclinux@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org, linux-arm-kernel@lists.infradead.org, x86@kernel.org, kasan-dev@googlegroups.com, borntraeger@de.ibm.com, heiko.carstens@de.ibm.com, davem@davemloft.net, willy@infradead.org, mhocko@kernel.org
Subject: [v5 01/15] x86/mm: reserve only existing low pages
Date: Thu, 3 Aug 2017 17:23:39 -0400
Message-Id: <1501795433-982645-2-git-send-email-pasha.tatashin@oracle.com>
In-Reply-To: <1501795433-982645-1-git-send-email-pasha.tatashin@oracle.com>
References: <1501795433-982645-1-git-send-email-pasha.tatashin@oracle.com>

Struct pages are initialized by going through __init_single_page(). Since the existing physical memory in memblock is represented by the memblock.memory list, the struct page for every page on that list goes through __init_single_page(). The second memblock list, memblock.reserved, manages allocated memory: memory that will not be available to the kernel allocator.
So, every page from this list goes through reserve_bootmem_region(), where certain struct page fields are set, the assumption being that the struct pages have been initialized beforehand.

In trim_low_memory_range() we unconditionally reserve memory from PFN 0, but memblock.memory might start at a later PFN. For example, in QEMU, e820__memblock_setup() can use PFN 1 as the first PFN in memblock.memory, so PFN 0 is not in memblock.memory (and hence is not initialized via __init_single_page()) but is in memblock.reserved (and hence we set fields in an uninitialized struct page).

Currently, struct page memory is always zeroed during allocation, which prevents this problem from being detected. But if the asserts provided by CONFIG_DEBUG_VM_PGFLAGS are tightened, this problem may become visible in existing kernels.

In this patchset we will stop zeroing struct page memory during allocation. Therefore, this bug must be fixed first in order to avoid random assert failures triggered by CONFIG_DEBUG_VM_PGFLAGS.

The fix is to reserve memory starting from the first existing PFN.

Signed-off-by: Pavel Tatashin
Reviewed-by: Steven Sistare
Reviewed-by: Daniel Jordan
Reviewed-by: Bob Picco
---
 arch/x86/kernel/setup.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 3486d0498800..489cdc141bcb 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -790,7 +790,10 @@ early_param("reservelow", parse_reservelow);
 
 static void __init trim_low_memory_range(void)
 {
-	memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE));
+	unsigned long min_pfn = find_min_pfn_with_active_regions();
+	phys_addr_t base = min_pfn << PAGE_SHIFT;
+
+	memblock_reserve(base, ALIGN(reserve_low, PAGE_SIZE));
 }
 
 /*