From patchwork Thu May 9 04:46:15 2019
X-Patchwork-Submitter: Anshuman Khandual
X-Patchwork-Id: 10936619
From: Anshuman Khandual
To: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH V3 0/2] mm/ioremap: Check virtual address alignment
Date: Thu, 9 May 2019 10:16:15 +0530
Message-Id: <1557377177-20695-1-git-send-email-anshuman.khandual@arm.com>
Cc: Mark Rutland, Toshi Kani, Anshuman Khandual, Catalin Marinas,
 Will Deacon, James Morse, Chintan Pandya, Andrew Morton,
 Laura Abbott, Robin Murphy, Thomas Gleixner
List-Id: linux-arm-kernel.lists.infradead.org

This series makes sure that ioremap_page_range()'s input virtual address
alignment is checked along with
physical address before creating huge page kernel mappings, to avoid
problems related to random freeing of PMD or PTE pgtable pages that may
still contain valid entries. It also cleans up the arm64 pgtable page
address offset in [pud|pmd]_free_[pmd|pte]_page().

Changes in V3:

- Added virtual address alignment check in ioremap_page_range()
- Dropped VM_WARN_ONCE() as input virtual addresses are guaranteed to be aligned

Changes in V2: (https://patchwork.kernel.org/patch/10922795/)

- Replaced WARN_ON_ONCE() with VM_WARN_ONCE() as per Catalin

Changes in V1: (https://patchwork.kernel.org/patch/10921135/)

Cc: Andrew Morton
Cc: Will Deacon
Cc: Toshi Kani
Cc: Thomas Gleixner
Cc: Catalin Marinas
Cc: Mark Rutland
Cc: James Morse
Cc: Chintan Pandya
Cc: Robin Murphy
Cc: Laura Abbott

Anshuman Khandual (2):
  mm/ioremap: Check virtual address alignment while creating huge
    mappings
  arm64/mm: Change offset base address in [pud|pmd]_free_[pmd|pte]_page()

 arch/arm64/mm/mmu.c | 6 +++---
 lib/ioremap.c       | 6 ++++++
 2 files changed, 9 insertions(+), 3 deletions(-)