From patchwork Wed Sep 21 22:25:04 2016
From: Laura Abbott
X-Patchwork-Id: 9344311
To: Catalin Marinas, Will Deacon, Ard Biesheuvel, Mark Rutland
Cc: Laura Abbott, Kees Cook, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: [PATCHv2] arm64: Correctly bounds check virt_addr_valid
Date: Wed, 21 Sep 2016 15:25:04 -0700
Message-Id: <1474496704-30541-1-git-send-email-labbott@redhat.com>
virt_addr_valid is supposed to return true if and only if virt_to_page
returns a valid page structure. The current macro does math on whatever
address it is given and passes that to pfn_valid to verify. vmalloc and
module addresses can generate a pfn that happens to be valid. Fix this
by only performing the pfn_valid check on addresses that have the
potential to be valid.

Acked-by: Mark Rutland
Signed-off-by: Laura Abbott
---
v2: Properly parenthesize macro arguments. Re-factor to a common macro.

Also, in case it wasn't clear, there's no need to try and squeeze this
into 4.8. Hardened usercopy should have all the checks; this is just
for full correctness.
---
 arch/arm64/include/asm/memory.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 31b7322..ba62df8 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -214,7 +214,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 
 #ifndef CONFIG_SPARSEMEM_VMEMMAP
 #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
-#define virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
+#define _virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
 #else
 #define __virt_to_pgoff(kaddr)	(((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
 #define __page_to_voff(kaddr)	(((u64)(page) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
@@ -222,11 +222,15 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define page_to_virt(page)	((void *)((__page_to_voff(page)) | PAGE_OFFSET))
 #define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START))
 
-#define virt_addr_valid(kaddr)	pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
+#define _virt_addr_valid(kaddr)	pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
 					   + PHYS_OFFSET) >> PAGE_SHIFT)
 #endif
 #endif
 
+#define _virt_addr_is_linear(kaddr)	(((u64)(kaddr)) >= PAGE_OFFSET)
+#define virt_addr_valid(kaddr)		(_virt_addr_is_linear(kaddr) && \
+					 _virt_addr_valid(kaddr))
+
 #include <asm-generic/memory_model.h>
 
 #endif
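
For reference, here is a standalone userspace sketch of the before/after
behaviour of the macro arithmetic. It is not kernel code: the PAGE_OFFSET,
PHYS_OFFSET and pfn-range constants below are illustrative stand-ins (as is
the pfn_valid() stub), chosen only to show how an address outside the linear
map can land on a pfn that happens to be backed by RAM, and how the added
linear-map bounds check rejects it.

/* illustration only -- all constants below are made-up stand-ins */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_OFFSET	0xffff800000000000ULL	/* start of the linear map (illustrative) */
#define PHYS_OFFSET	0x40000000ULL		/* start of RAM (illustrative) */
#define FIRST_PFN	(PHYS_OFFSET >> PAGE_SHIFT)
#define LAST_PFN	(FIRST_PFN + 0x40000)	/* pretend 1 GiB of RAM */

/* Stand-in for pfn_valid(): true if the pfn falls inside our pretend RAM. */
static bool pfn_valid(uint64_t pfn)
{
	return pfn >= FIRST_PFN && pfn < LAST_PFN;
}

/* Old behaviour: do the linear-map arithmetic on whatever address we get. */
static bool virt_addr_valid_old(uint64_t kaddr)
{
	return pfn_valid(((kaddr & ~PAGE_OFFSET) + PHYS_OFFSET) >> PAGE_SHIFT);
}

/* New behaviour: only linear-map addresses ever reach the pfn_valid() check. */
static bool virt_addr_valid_new(uint64_t kaddr)
{
	return kaddr >= PAGE_OFFSET && virt_addr_valid_old(kaddr);
}

int main(void)
{
	uint64_t linear  = PAGE_OFFSET + 0x1000;	/* genuine linear-map address */
	uint64_t vmalloc = 0xffff000008000000ULL;	/* e.g. a vmalloc/module address */

	/* old=1 new=1 for the linear address; old=1 new=0 for the vmalloc one */
	printf("linear : old=%d new=%d\n",
	       virt_addr_valid_old(linear), virt_addr_valid_new(linear));
	printf("vmalloc: old=%d new=%d\n",
	       virt_addr_valid_old(vmalloc), virt_addr_valid_new(vmalloc));
	return 0;
}

With these example constants the unbounded version reports the vmalloc-range
address as valid because the masked-and-offset arithmetic lands on a pfn
inside RAM, which is exactly the false positive the patch closes off.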