From patchwork Wed Jun 17 22:34:14 2020
X-Patchwork-Submitter: Kaiyu Zhang
X-Patchwork-Id: 11610891
Date: Wed, 17 Jun 2020 15:34:14 -0700
Message-Id: <20200617223414.165923-1-zhangalex@google.com>
Subject: [PATCH] mm/memory.c: make remap_pfn_range() reject unaligned addr
From: Kaiyu Zhang
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Alex Zhang

From: Alex Zhang

remap_pfn_range() implicitly assumes that the addr passed in is page
aligned. A non-page-aligned addr can ultimately trigger a kernel bug in
remap_pte_range(), because the exit condition of its mapping loop
(addr advanced by PAGE_SIZE until addr == end) may never be satisfied.
Document the alignment requirement and add an explicit check for it.

Signed-off-by: Alex Zhang
---
 mm/memory.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index dc7f3543b1fd..9cb0a75f1555 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2081,7 +2081,7 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
 /**
  * remap_pfn_range - remap kernel memory to userspace
  * @vma: user vma to map to
- * @addr: target user address to start at
+ * @addr: target page aligned user address to start at
  * @pfn: page frame number of kernel physical memory address
  * @size: size of mapping area
  * @prot: page protection flags for this mapping
@@ -2100,6 +2100,9 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
 	unsigned long remap_pfn = pfn;
 	int err;
 
+	if (!PAGE_ALIGNED(addr))
+		return -EINVAL;
+
 	/*
 	 * Physically remapped pages are special. Tell the
 	 * rest of the world about it:
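
Not part of the patch: a stand-alone user-space sketch of the failure mode
being guarded against. The addresses below are hypothetical, and the loop
mirrors only the exit test of remap_pte_range() (addr stepped by PAGE_SIZE
until addr == end); when the mapped range crosses a page-table boundary,
that per-range end is page aligned, so a misaligned addr steps past it and
the comparison never becomes true.

/* Stand-alone demo (user space, made-up values), not kernel code. */
#include <stdio.h>

#define DEMO_PAGE_SIZE 0x1000UL

int main(void)
{
	unsigned long addr = 0x1200;	/* misaligned start */
	unsigned long end  = 0x3000;	/* page-aligned boundary */
	int steps = 0;

	/* Mirrors the exit test used by the mapping loop: addr != end. */
	while (addr != end && steps < 8) {	/* cap added so the demo halts */
		addr += DEMO_PAGE_SIZE;
		steps++;
	}

	/* addr passes through 0x2200, 0x3200, ... and never equals 0x3000. */
	printf("addr=%#lx end=%#lx, loop %s\n", addr, end,
	       addr == end ? "terminated" : "would never terminate");
	return 0;
}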
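
Also not part of the patch: a minimal sketch of a typical caller, assuming a
hypothetical character device (struct my_dev and my_dev_mmap are made-up
names). It shows where the new -EINVAL would surface; callers that pass the
already page-aligned vma->vm_start are unaffected by the check.

#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical device state; fields assumed to describe a page-aligned region. */
struct my_dev {
	phys_addr_t	region_phys;	/* physical base of the region */
	size_t		region_size;	/* length of the region */
};

static int my_dev_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct my_dev *dev = file->private_data;
	unsigned long size = vma->vm_end - vma->vm_start;

	if (size > dev->region_size)
		return -EINVAL;

	/*
	 * vma->vm_start is page aligned by the core mmap() path, so this
	 * call satisfies the documented requirement; a misaligned addr
	 * would now fail fast with -EINVAL instead of mis-walking the
	 * page tables.
	 */
	return remap_pfn_range(vma, vma->vm_start,
			       dev->region_phys >> PAGE_SHIFT,
			       size, vma->vm_page_prot);
}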