From patchwork Wed Sep 21 17:28:48 2016
X-Patchwork-Submitter: Laura Abbott
X-Patchwork-Id: 9343989
From: Laura Abbott
To: Catalin Marinas, Will Deacon, Ard Biesheuvel, Mark Rutland
Cc: Laura Abbott, Kees Cook, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH] arm64: Correctly bounds check virt_addr_valid
Date: Wed, 21 Sep 2016 10:28:48 -0700
Message-Id: <1474478928-25022-1-git-send-email-labbott@redhat.com>
virt_addr_valid is supposed to return true if and only if virt_to_page
returns a valid page structure. The current macro does math on whatever
address is given and passes that to pfn_valid to verify. A vmalloc or
module address can generate a pfn that 'happens' to be valid. Fix this
by only performing the pfn_valid check on addresses that have the
potential to be valid.

Signed-off-by: Laura Abbott
Acked-by: Mark Rutland
---
This caused a bug at least twice in hardened usercopy, so it is an
actual problem. A further TODO is full DEBUG_VIRTUAL support to catch
these types of mistakes.
---
 arch/arm64/include/asm/memory.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 31b7322..f741e19 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -214,7 +214,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 
 #ifndef CONFIG_SPARSEMEM_VMEMMAP
 #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
-#define virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
+#define virt_addr_valid(kaddr)	(((u64)kaddr) >= PAGE_OFFSET && pfn_valid(__pa(kaddr) >> PAGE_SHIFT))
 #else
 #define __virt_to_pgoff(kaddr)	(((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
 #define __page_to_voff(kaddr)	(((u64)(page) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
@@ -222,8 +222,8 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define page_to_virt(page)	((void *)((__page_to_voff(page)) | PAGE_OFFSET))
 #define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START))
 
-#define virt_addr_valid(kaddr)	pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
-					   + PHYS_OFFSET) >> PAGE_SHIFT)
+#define virt_addr_valid(kaddr)	(((u64)kaddr) >= PAGE_OFFSET && pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
+					   + PHYS_OFFSET) >> PAGE_SHIFT))
 #endif
 
 #endif
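
For illustration only (not part of the patch): a standalone userspace sketch of
the old and new macro behaviour. The PAGE_OFFSET, PHYS_OFFSET, pfn range and
sample addresses below are made-up example constants, not the layout of any
real kernel; the point is just to show how an address outside the linear map
can land on a pfn that pfn_valid accepts, while the added lower-bound check
screens it out first.

/* sketch: model of virt_addr_valid before and after the patch,
 * with illustrative constants (assumptions, not real kernel values) */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT   12
#define PAGE_OFFSET  0xffff800000000000ULL  /* start of the linear map (example) */
#define PHYS_OFFSET  0x0000000080000000ULL  /* start of RAM (example) */
#define NR_PFNS      0x80000ULL             /* pretend 2GB of RAM is backed by struct pages */

/* stand-in for pfn_valid(): pfn falls inside the pretend RAM window */
static int pfn_valid(uint64_t pfn)
{
	return pfn >= (PHYS_OFFSET >> PAGE_SHIFT) &&
	       pfn <  (PHYS_OFFSET >> PAGE_SHIFT) + NR_PFNS;
}

/* old behaviour: do the linear-map arithmetic on whatever address is given */
static int virt_addr_valid_old(uint64_t kaddr)
{
	return pfn_valid(((kaddr & ~PAGE_OFFSET) + PHYS_OFFSET) >> PAGE_SHIFT);
}

/* patched behaviour: only addresses at or above PAGE_OFFSET can be linear-map
 * addresses, so reject everything else before doing the pfn math */
static int virt_addr_valid_new(uint64_t kaddr)
{
	return kaddr >= PAGE_OFFSET && virt_addr_valid_old(kaddr);
}

int main(void)
{
	uint64_t linear  = PAGE_OFFSET + 0x1000;        /* genuine linear-map address */
	uint64_t vmalloc = 0xffff000008000000ULL;       /* vmalloc/module-range address (example) */

	printf("linear map : old=%d new=%d\n",
	       virt_addr_valid_old(linear), virt_addr_valid_new(linear));
	printf("vmalloc    : old=%d new=%d\n",
	       virt_addr_valid_old(vmalloc), virt_addr_valid_new(vmalloc));
	return 0;
}

With these example constants the vmalloc-range address masks and offsets into a
pfn inside the pretend RAM window, so the old macro reports it as valid; only
the added >= PAGE_OFFSET bound rejects it.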