From patchwork Tue Jun 25 14:37:00 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11015659
From: Christoph Hellwig
To: Andrew Morton, Linus Torvalds, Paul Burton, James Hogan, Yoshinori Sato, Rich Felker, "David S. Miller"
Cc: Nicholas Piggin, Khalid Aziz, Andrey Konovalov, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, linux-mips@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org, x86@kernel.org, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH 01/16] mm: use untagged_addr() for get_user_pages_fast addresses
Date: Tue, 25 Jun 2019 16:37:00 +0200
Message-Id: <20190625143715.1689-2-hch@lst.de>
In-Reply-To: <20190625143715.1689-1-hch@lst.de>
References: <20190625143715.1689-1-hch@lst.de>

This will allow sparc64, or any future architecture with memory tagging,
to override its tags for get_user_pages and get_user_pages_fast.
Signed-off-by: Christoph Hellwig
Reviewed-by: Khalid Aziz
Reviewed-by: Jason Gunthorpe
---
 mm/gup.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index ddde097cf9e4..6bb521db67ec 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2146,7 +2146,7 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
 	unsigned long flags;
 	int nr = 0;
 
-	start &= PAGE_MASK;
+	start = untagged_addr(start) & PAGE_MASK;
 	len = (unsigned long) nr_pages << PAGE_SHIFT;
 	end = start + len;
@@ -2219,7 +2219,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
 	unsigned long addr, len, end;
 	int nr = 0, ret = 0;
 
-	start &= PAGE_MASK;
+	start = untagged_addr(start) & PAGE_MASK;
 	addr = start;
 	len = (unsigned long) nr_pages << PAGE_SHIFT;
 	end = start + len;

From patchwork Tue Jun 25 14:37:01 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11015669
From: Christoph Hellwig
To: Andrew Morton, Linus Torvalds, Paul Burton, James Hogan, Yoshinori Sato, Rich Felker, "David S. Miller"
Cc: Nicholas Piggin, Khalid Aziz, Andrey Konovalov, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, linux-mips@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org, x86@kernel.org, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH 02/16] mm: simplify gup_fast_permitted
Date: Tue, 25 Jun 2019 16:37:01 +0200
Message-Id: <20190625143715.1689-3-hch@lst.de>
In-Reply-To: <20190625143715.1689-1-hch@lst.de>
References: <20190625143715.1689-1-hch@lst.de>

Pass in the already calculated end value instead of recomputing it, and
leave the end > start check in the callers instead of duplicating it in
the arch code.
Signed-off-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
---
 arch/s390/include/asm/pgtable.h   |  8 +-------
 arch/x86/include/asm/pgtable_64.h |  8 +-------
 mm/gup.c                          | 17 +++++++----------
 3 files changed, 9 insertions(+), 24 deletions(-)

diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 9f0195d5fa16..9b274fcaacb6 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1270,14 +1270,8 @@ static inline pte_t *pte_offset(pmd_t *pmd, unsigned long address)
 #define pte_offset_map(pmd, address) pte_offset_kernel(pmd, address)
 #define pte_unmap(pte) do { } while (0)
 
-static inline bool gup_fast_permitted(unsigned long start, int nr_pages)
+static inline bool gup_fast_permitted(unsigned long start, unsigned long end)
 {
-	unsigned long len, end;
-
-	len = (unsigned long) nr_pages << PAGE_SHIFT;
-	end = start + len;
-	if (end < start)
-		return false;
 	return end <= current->mm->context.asce_limit;
 }
 #define gup_fast_permitted gup_fast_permitted
diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
index 0bb566315621..4990d26dfc73 100644
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -259,14 +259,8 @@ extern void init_extra_mapping_uc(unsigned long phys, unsigned long size);
 extern void init_extra_mapping_wb(unsigned long phys, unsigned long size);
 
 #define gup_fast_permitted gup_fast_permitted
-static inline bool gup_fast_permitted(unsigned long start, int nr_pages)
+static inline bool gup_fast_permitted(unsigned long start, unsigned long end)
 {
-	unsigned long len, end;
-
-	len = (unsigned long)nr_pages << PAGE_SHIFT;
-	end = start + len;
-	if (end < start)
-		return false;
 	if (end >> __VIRTUAL_MASK_SHIFT)
 		return false;
 	return true;
diff --git a/mm/gup.c b/mm/gup.c
index 6bb521db67ec..3237f33792e6 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2123,13 +2123,9 @@ static void gup_pgd_range(unsigned long addr, unsigned long end,
  * Check if it's allowed to use __get_user_pages_fast() for the range, or
  * we need to fall back to the slow version:
  */
-bool gup_fast_permitted(unsigned long start, int nr_pages)
+static bool gup_fast_permitted(unsigned long start, unsigned long end)
 {
-	unsigned long len, end;
-
-	len = (unsigned long) nr_pages << PAGE_SHIFT;
-	end = start + len;
-	return end >= start;
+	return true;
 }
 #endif
@@ -2150,6 +2146,8 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
 	len = (unsigned long) nr_pages << PAGE_SHIFT;
 	end = start + len;
 
+	if (end <= start)
+		return 0;
 	if (unlikely(!access_ok((void __user *)start, len)))
 		return 0;
@@ -2165,7 +2163,7 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
 	 * block IPIs that come from THPs splitting.
 	 */
 
-	if (gup_fast_permitted(start, nr_pages)) {
+	if (gup_fast_permitted(start, end)) {
 		local_irq_save(flags);
 		gup_pgd_range(start, end, write ? FOLL_WRITE : 0, pages, &nr);
 		local_irq_restore(flags);
@@ -2224,13 +2222,12 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
 	len = (unsigned long) nr_pages << PAGE_SHIFT;
 	end = start + len;
 
-	if (nr_pages <= 0)
+	if (end <= start)
 		return 0;
-
 	if (unlikely(!access_ok((void __user *)start, len)))
 		return -EFAULT;
 
-	if (gup_fast_permitted(start, nr_pages)) {
+	if (gup_fast_permitted(start, end)) {
 		local_irq_disable();
 		gup_pgd_range(addr, end, gup_flags, pages, &nr);
 		local_irq_enable();

From patchwork Tue Jun 25 14:37:02 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11015663
From: Christoph Hellwig
To: Andrew Morton, Linus Torvalds, Paul Burton, James Hogan, Yoshinori Sato, Rich Felker, "David S. Miller"
Cc: Nicholas Piggin, Khalid Aziz, Andrey Konovalov, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, linux-mips@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org, x86@kernel.org, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH 03/16] mm: lift the x86_32 PAE version of gup_get_pte to common code
Date: Tue, 25 Jun 2019 16:37:02 +0200
Message-Id: <20190625143715.1689-4-hch@lst.de>
In-Reply-To: <20190625143715.1689-1-hch@lst.de>
References: <20190625143715.1689-1-hch@lst.de>

The split low/high access is the only non-READ_ONCE version of
gup_get_pte that showed up in the various arch implementations.
Lift it to common code and drop the ifdef-based arch override.
Signed-off-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
---
 arch/x86/Kconfig                      |  1 +
 arch/x86/include/asm/pgtable-3level.h | 47 ------------------------
 arch/x86/kvm/mmu.c                    |  2 +-
 mm/Kconfig                            |  3 ++
 mm/gup.c                              | 51 ++++++++++++++++++++++++---
 5 files changed, 52 insertions(+), 52 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2bbbd4d1ba31..7cd53cc59f0f 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -121,6 +121,7 @@ config X86
 	select GENERIC_STRNCPY_FROM_USER
 	select GENERIC_STRNLEN_USER
 	select GENERIC_TIME_VSYSCALL
+	select GUP_GET_PTE_LOW_HIGH		if X86_PAE
 	select HARDLOCKUP_CHECK_TIMESTAMP	if X86_64
 	select HAVE_ACPI_APEI			if ACPI
 	select HAVE_ACPI_APEI_NMI		if ACPI
diff --git a/arch/x86/include/asm/pgtable-3level.h b/arch/x86/include/asm/pgtable-3level.h
index f8b1ad2c3828..e3633795fb22 100644
--- a/arch/x86/include/asm/pgtable-3level.h
+++ b/arch/x86/include/asm/pgtable-3level.h
@@ -285,53 +285,6 @@ static inline pud_t native_pudp_get_and_clear(pud_t *pudp)
 #define __pte_to_swp_entry(pte)	(__swp_entry(__pteval_swp_type(pte), \
 					     __pteval_swp_offset(pte)))
 
-#define gup_get_pte gup_get_pte
-/*
- * WARNING: only to be used in the get_user_pages_fast() implementation.
- *
- * With get_user_pages_fast(), we walk down the pagetables without taking
- * any locks.  For this we would like to load the pointers atomically,
- * but that is not possible (without expensive cmpxchg8b) on PAE.  What
- * we do have is the guarantee that a PTE will only either go from not
- * present to present, or present to not present or both -- it will not
- * switch to a completely different present page without a TLB flush in
- * between; something that we are blocking by holding interrupts off.
- *
- * Setting ptes from not present to present goes:
- *
- *   ptep->pte_high = h;
- *   smp_wmb();
- *   ptep->pte_low = l;
- *
- * And present to not present goes:
- *
- *   ptep->pte_low = 0;
- *   smp_wmb();
- *   ptep->pte_high = 0;
- *
- * We must ensure here that the load of pte_low sees 'l' iff pte_high
- * sees 'h'.  We load pte_high *after* loading pte_low, which ensures we
- * don't see an older value of pte_high.  *Then* we recheck pte_low,
- * which ensures that we haven't picked up a changed pte high.  We might
- * have gotten rubbish values from pte_low and pte_high, but we are
- * guaranteed that pte_low will not have the present bit set *unless*
- * it is 'l'.  Because get_user_pages_fast() only operates on present ptes
- * we're safe.
- */
-static inline pte_t gup_get_pte(pte_t *ptep)
-{
-	pte_t pte;
-
-	do {
-		pte.pte_low = ptep->pte_low;
-		smp_rmb();
-		pte.pte_high = ptep->pte_high;
-		smp_rmb();
-	} while (unlikely(pte.pte_low != ptep->pte_low));
-
-	return pte;
-}
-
 #include 
 
 #endif /* _ASM_X86_PGTABLE_3LEVEL_H */
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 98f6e4f88b04..4a9c63d1c20a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -650,7 +650,7 @@ static u64 __update_clear_spte_slow(u64 *sptep, u64 spte)
 
 /*
  * The idea using the light way get the spte on x86_32 guest is from
- * gup_get_pte(arch/x86/mm/gup.c).
+ * gup_get_pte (mm/gup.c).
  *
  * An spte tlb flush may be pending, because kvm_set_pte_rmapp
  * coalesces them and we are running out of the MMU lock.  Therefore
diff --git a/mm/Kconfig b/mm/Kconfig
index f0c76ba47695..fe51f104a9e0 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -762,6 +762,9 @@ config GUP_BENCHMARK
 
 	  See tools/testing/selftests/vm/gup_benchmark.c
 
+config GUP_GET_PTE_LOW_HIGH
+	bool
+
 config ARCH_HAS_PTE_SPECIAL
 	bool
diff --git a/mm/gup.c b/mm/gup.c
index 3237f33792e6..9b72f2ea3471 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1684,17 +1684,60 @@ struct page *get_dump_page(unsigned long addr)
  * This code is based heavily on the PowerPC implementation by Nick Piggin.
  */
 #ifdef CONFIG_HAVE_GENERIC_GUP
+#ifdef CONFIG_GUP_GET_PTE_LOW_HIGH
+/*
+ * WARNING: only to be used in the get_user_pages_fast() implementation.
+ *
+ * With get_user_pages_fast(), we walk down the pagetables without taking any
+ * locks.  For this we would like to load the pointers atomically, but sometimes
+ * that is not possible (e.g. without expensive cmpxchg8b on x86_32 PAE).  What
+ * we do have is the guarantee that a PTE will only either go from not present
+ * to present, or present to not present or both -- it will not switch to a
+ * completely different present page without a TLB flush in between; something
+ * that we are blocking by holding interrupts off.
+ *
+ * Setting ptes from not present to present goes:
+ *
+ *   ptep->pte_high = h;
+ *   smp_wmb();
+ *   ptep->pte_low = l;
+ *
+ * And present to not present goes:
+ *
+ *   ptep->pte_low = 0;
+ *   smp_wmb();
+ *   ptep->pte_high = 0;
+ *
+ * We must ensure here that the load of pte_low sees 'l' IFF pte_high sees 'h'.
+ * We load pte_high *after* loading pte_low, which ensures we don't see an older
+ * value of pte_high.  *Then* we recheck pte_low, which ensures that we haven't
+ * picked up a changed pte high.  We might have gotten rubbish values from
+ * pte_low and pte_high, but we are guaranteed that pte_low will not have the
+ * present bit set *unless* it is 'l'.  Because get_user_pages_fast() only
+ * operates on present ptes we're safe.
+ */
+static inline pte_t gup_get_pte(pte_t *ptep)
+{
+	pte_t pte;
 
-#ifndef gup_get_pte
+	do {
+		pte.pte_low = ptep->pte_low;
+		smp_rmb();
+		pte.pte_high = ptep->pte_high;
+		smp_rmb();
+	} while (unlikely(pte.pte_low != ptep->pte_low));
+
+	return pte;
+}
+#else /* CONFIG_GUP_GET_PTE_LOW_HIGH */
 /*
- * We assume that the PTE can be read atomically. If this is not the case for
- * your architecture, please provide the helper.
+ * We require that the PTE can be read atomically.
 */
 static inline pte_t gup_get_pte(pte_t *ptep)
 {
 	return READ_ONCE(*ptep);
 }
-#endif
+#endif /* CONFIG_GUP_GET_PTE_LOW_HIGH */
 
 static void undo_dev_pagemap(int *nr, int nr_start, struct page **pages)
 {

From patchwork Tue Jun 25 14:37:03 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11015681
From: Christoph Hellwig
To: Andrew Morton, Linus Torvalds, Paul Burton, James Hogan, Yoshinori Sato, Rich Felker, "David S. Miller"
Cc: Nicholas Piggin, Khalid Aziz, Andrey Konovalov, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, linux-mips@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org, x86@kernel.org, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH 04/16] MIPS: use the generic get_user_pages_fast code
Date: Tue, 25 Jun 2019 16:37:03 +0200
Message-Id: <20190625143715.1689-5-hch@lst.de>
In-Reply-To: <20190625143715.1689-1-hch@lst.de>
References: <20190625143715.1689-1-hch@lst.de>

The mips code is mostly equivalent to the generic one, minus various
bugfixes and an arch override for gup_fast_permitted.

Note that this defines ARCH_HAS_PTE_SPECIAL for mips as mips has
pte_special and pte_mkspecial implemented and used in the existing gup
code. They are no-op stubs, though, which makes me a little unsure
whether this is really the right thing to do.

Note that this also adds back a missing cpu_has_dc_aliases check for
__get_user_pages_fast, which the old code was only doing for
get_user_pages_fast. This clearly looks like an oversight, as any
condition that makes get_user_pages_fast unsafe also applies to
__get_user_pages_fast.
Signed-off-by: Christoph Hellwig Reviewed-by: Jason Gunthorpe --- arch/mips/Kconfig | 3 + arch/mips/include/asm/pgtable.h | 3 + arch/mips/mm/Makefile | 1 - arch/mips/mm/gup.c | 303 -------------------------------- 4 files changed, 6 insertions(+), 304 deletions(-) delete mode 100644 arch/mips/mm/gup.c diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig index 70d3200476bf..64108a2a16d4 100644 --- a/arch/mips/Kconfig +++ b/arch/mips/Kconfig @@ -6,6 +6,7 @@ config MIPS select ARCH_BINFMT_ELF_STATE if MIPS_FP_SUPPORT select ARCH_CLOCKSOURCE_DATA select ARCH_HAS_ELF_RANDOMIZE + select ARCH_HAS_PTE_SPECIAL select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST select ARCH_HAS_UBSAN_SANITIZE_ALL select ARCH_SUPPORTS_UPROBES @@ -34,6 +35,7 @@ config MIPS select GENERIC_SCHED_CLOCK if !CAVIUM_OCTEON_SOC select GENERIC_SMP_IDLE_THREAD select GENERIC_TIME_VSYSCALL + select GUP_GET_PTE_LOW_HIGH if CPU_MIPS32 && PHYS_ADDR_T_64BIT select HANDLE_DOMAIN_IRQ select HAVE_ARCH_COMPILER_H select HAVE_ARCH_JUMP_LABEL @@ -55,6 +57,7 @@ config MIPS select HAVE_FTRACE_MCOUNT_RECORD select HAVE_FUNCTION_GRAPH_TRACER select HAVE_FUNCTION_TRACER + select HAVE_GENERIC_GUP select HAVE_IDE select HAVE_IOREMAP_PROT select HAVE_IRQ_EXIT_ON_IRQ_STACK diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h index 4ccb465ef3f2..7d27194e3b45 100644 --- a/arch/mips/include/asm/pgtable.h +++ b/arch/mips/include/asm/pgtable.h @@ -20,6 +20,7 @@ #include #include #include +#include struct mm_struct; struct vm_area_struct; @@ -626,6 +627,8 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm, #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ +#define gup_fast_permitted(start, end) (!cpu_has_dc_aliases) + #include /* diff --git a/arch/mips/mm/Makefile b/arch/mips/mm/Makefile index f34d7ff5eb60..1e8d335025d7 100644 --- a/arch/mips/mm/Makefile +++ b/arch/mips/mm/Makefile @@ -7,7 +7,6 @@ obj-y += cache.o obj-y += context.o obj-y += extable.o obj-y += fault.o -obj-y += gup.o 
obj-y += init.o obj-y += mmap.o obj-y += page.o diff --git a/arch/mips/mm/gup.c b/arch/mips/mm/gup.c deleted file mode 100644 index 4c2b4483683c..000000000000 --- a/arch/mips/mm/gup.c +++ /dev/null @@ -1,303 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * Lockless get_user_pages_fast for MIPS - * - * Copyright (C) 2008 Nick Piggin - * Copyright (C) 2008 Novell Inc. - * Copyright (C) 2011 Ralf Baechle - */ -#include -#include -#include -#include -#include -#include - -#include -#include - -static inline pte_t gup_get_pte(pte_t *ptep) -{ -#if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32) - pte_t pte; - -retry: - pte.pte_low = ptep->pte_low; - smp_rmb(); - pte.pte_high = ptep->pte_high; - smp_rmb(); - if (unlikely(pte.pte_low != ptep->pte_low)) - goto retry; - - return pte; -#else - return READ_ONCE(*ptep); -#endif -} - -static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end, - int write, struct page **pages, int *nr) -{ - pte_t *ptep = pte_offset_map(&pmd, addr); - do { - pte_t pte = gup_get_pte(ptep); - struct page *page; - - if (!pte_present(pte) || - pte_special(pte) || (write && !pte_write(pte))) { - pte_unmap(ptep); - return 0; - } - VM_BUG_ON(!pfn_valid(pte_pfn(pte))); - page = pte_page(pte); - get_page(page); - SetPageReferenced(page); - pages[*nr] = page; - (*nr)++; - - } while (ptep++, addr += PAGE_SIZE, addr != end); - - pte_unmap(ptep - 1); - return 1; -} - -static inline void get_head_page_multiple(struct page *page, int nr) -{ - VM_BUG_ON(page != compound_head(page)); - VM_BUG_ON(page_count(page) == 0); - page_ref_add(page, nr); - SetPageReferenced(page); -} - -static int gup_huge_pmd(pmd_t pmd, unsigned long addr, unsigned long end, - int write, struct page **pages, int *nr) -{ - pte_t pte = *(pte_t *)&pmd; - struct page *head, *page; - int refs; - - if (write && !pte_write(pte)) - return 0; - /* hugepages are never "special" */ - VM_BUG_ON(pte_special(pte)); - VM_BUG_ON(!pfn_valid(pte_pfn(pte))); - - refs = 0; 
- head = pte_page(pte); - page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT); - do { - VM_BUG_ON(compound_head(page) != head); - pages[*nr] = page; - (*nr)++; - page++; - refs++; - } while (addr += PAGE_SIZE, addr != end); - - get_head_page_multiple(head, refs); - return 1; -} - -static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end, - int write, struct page **pages, int *nr) -{ - unsigned long next; - pmd_t *pmdp; - - pmdp = pmd_offset(&pud, addr); - do { - pmd_t pmd = *pmdp; - - next = pmd_addr_end(addr, end); - if (pmd_none(pmd)) - return 0; - if (unlikely(pmd_huge(pmd))) { - if (!gup_huge_pmd(pmd, addr, next, write, pages,nr)) - return 0; - } else { - if (!gup_pte_range(pmd, addr, next, write, pages,nr)) - return 0; - } - } while (pmdp++, addr = next, addr != end); - - return 1; -} - -static int gup_huge_pud(pud_t pud, unsigned long addr, unsigned long end, - int write, struct page **pages, int *nr) -{ - pte_t pte = *(pte_t *)&pud; - struct page *head, *page; - int refs; - - if (write && !pte_write(pte)) - return 0; - /* hugepages are never "special" */ - VM_BUG_ON(pte_special(pte)); - VM_BUG_ON(!pfn_valid(pte_pfn(pte))); - - refs = 0; - head = pte_page(pte); - page = head + ((addr & ~PUD_MASK) >> PAGE_SHIFT); - do { - VM_BUG_ON(compound_head(page) != head); - pages[*nr] = page; - (*nr)++; - page++; - refs++; - } while (addr += PAGE_SIZE, addr != end); - - get_head_page_multiple(head, refs); - return 1; -} - -static int gup_pud_range(pgd_t pgd, unsigned long addr, unsigned long end, - int write, struct page **pages, int *nr) -{ - unsigned long next; - pud_t *pudp; - - pudp = pud_offset(&pgd, addr); - do { - pud_t pud = *pudp; - - next = pud_addr_end(addr, end); - if (pud_none(pud)) - return 0; - if (unlikely(pud_huge(pud))) { - if (!gup_huge_pud(pud, addr, next, write, pages,nr)) - return 0; - } else { - if (!gup_pmd_range(pud, addr, next, write, pages,nr)) - return 0; - } - } while (pudp++, addr = next, addr != end); - - return 1; -} - -/* - 
* Like get_user_pages_fast() except its IRQ-safe in that it won't fall - * back to the regular GUP. - * Note a difference with get_user_pages_fast: this always returns the - * number of pages pinned, 0 if no pages were pinned. - */ -int __get_user_pages_fast(unsigned long start, int nr_pages, int write, - struct page **pages) -{ - struct mm_struct *mm = current->mm; - unsigned long addr, len, end; - unsigned long next; - unsigned long flags; - pgd_t *pgdp; - int nr = 0; - - start &= PAGE_MASK; - addr = start; - len = (unsigned long) nr_pages << PAGE_SHIFT; - end = start + len; - if (unlikely(!access_ok((void __user *)start, len))) - return 0; - - /* - * XXX: batch / limit 'nr', to avoid large irq off latency - * needs some instrumenting to determine the common sizes used by - * important workloads (eg. DB2), and whether limiting the batch - * size will decrease performance. - * - * It seems like we're in the clear for the moment. Direct-IO is - * the main guy that batches up lots of get_user_pages, and even - * they are limited to 64-at-a-time which is not so many. - */ - /* - * This doesn't prevent pagetable teardown, but does prevent - * the pagetables and pages from being freed. - * - * So long as we atomically load page table pointers versus teardown, - * we can follow the address down to the page and take a ref on it. - */ - local_irq_save(flags); - pgdp = pgd_offset(mm, addr); - do { - pgd_t pgd = *pgdp; - - next = pgd_addr_end(addr, end); - if (pgd_none(pgd)) - break; - if (!gup_pud_range(pgd, addr, next, write, pages, &nr)) - break; - } while (pgdp++, addr = next, addr != end); - local_irq_restore(flags); - - return nr; -} - -/** - * get_user_pages_fast() - pin user pages in memory - * @start: starting user address - * @nr_pages: number of pages from start to pin - * @gup_flags: flags modifying pin behaviour - * @pages: array that receives pointers to the pages pinned. - * Should be at least nr_pages long. 
- *
- * Attempt to pin user pages in memory without taking mm->mmap_sem.
- * If not successful, it will fall back to taking the lock and
- * calling get_user_pages().
- *
- * Returns number of pages pinned. This may be fewer than the number
- * requested. If nr_pages is 0 or negative, returns 0. If no pages
- * were pinned, returns -errno.
- */
-int get_user_pages_fast(unsigned long start, int nr_pages,
-			unsigned int gup_flags, struct page **pages)
-{
-	struct mm_struct *mm = current->mm;
-	unsigned long addr, len, end;
-	unsigned long next;
-	pgd_t *pgdp;
-	int ret, nr = 0;
-
-	start &= PAGE_MASK;
-	addr = start;
-	len = (unsigned long) nr_pages << PAGE_SHIFT;
-
-	end = start + len;
-	if (end < start || cpu_has_dc_aliases)
-		goto slow_irqon;
-
-	/* XXX: batch / limit 'nr' */
-	local_irq_disable();
-	pgdp = pgd_offset(mm, addr);
-	do {
-		pgd_t pgd = *pgdp;
-
-		next = pgd_addr_end(addr, end);
-		if (pgd_none(pgd))
-			goto slow;
-		if (!gup_pud_range(pgd, addr, next, gup_flags & FOLL_WRITE,
-				   pages, &nr))
-			goto slow;
-	} while (pgdp++, addr = next, addr != end);
-	local_irq_enable();
-
-	VM_BUG_ON(nr != (end - start) >> PAGE_SHIFT);
-	return nr;
-slow:
-	local_irq_enable();
-
-slow_irqon:
-	/* Try to get the remaining pages with get_user_pages */
-	start += nr << PAGE_SHIFT;
-	pages += nr;
-
-	ret = get_user_pages_unlocked(start, (end - start) >> PAGE_SHIFT,
-				      pages, gup_flags);
-
-	/* Have to be a bit careful with return values */
-	if (nr > 0) {
-		if (ret < 0)
-			ret = nr;
-		else
-			ret += nr;
-	}
-	return ret;
-}

From patchwork Tue Jun 25 14:37:04 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11015677
From: Christoph Hellwig
Subject: [PATCH 05/16] sh: add the missing pud_page definition
Date: Tue, 25 Jun 2019 16:37:04 +0200
Message-Id: <20190625143715.1689-6-hch@lst.de>
In-Reply-To: <20190625143715.1689-1-hch@lst.de>
References: <20190625143715.1689-1-hch@lst.de>

sh only had pud_page_vaddr, but not pud_page.
Signed-off-by: Christoph Hellwig
---
 arch/sh/include/asm/pgtable-3level.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/sh/include/asm/pgtable-3level.h b/arch/sh/include/asm/pgtable-3level.h
index 7d8587eb65ff..3c7ff20f3f94 100644
--- a/arch/sh/include/asm/pgtable-3level.h
+++ b/arch/sh/include/asm/pgtable-3level.h
@@ -37,6 +37,7 @@ static inline unsigned long pud_page_vaddr(pud_t pud)
 {
 	return pud_val(pud);
 }
+#define pud_page(pud)		pfn_to_page(pud_pfn(pud))

 #define pmd_index(address)	(((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))

 static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)

From patchwork Tue Jun 25 14:37:05 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11015671
From: Christoph Hellwig
Subject: [PATCH 06/16] sh: use the generic get_user_pages_fast code
Date: Tue, 25 Jun 2019 16:37:05 +0200
Message-Id: <20190625143715.1689-7-hch@lst.de>
In-Reply-To: <20190625143715.1689-1-hch@lst.de>
References: <20190625143715.1689-1-hch@lst.de>

The sh code is mostly equivalent to the generic one, minus various
bugfixes and two arch overrides that this patch adds to pgtable.h.
Signed-off-by: Christoph Hellwig
---
 arch/sh/Kconfig               |   2 +
 arch/sh/include/asm/pgtable.h |  37 +++++
 arch/sh/mm/Makefile           |   2 +-
 arch/sh/mm/gup.c              | 277 ----------------------------------
 4 files changed, 40 insertions(+), 278 deletions(-)
 delete mode 100644 arch/sh/mm/gup.c

diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
index b77f512bb176..6fddfc3c9710 100644
--- a/arch/sh/Kconfig
+++ b/arch/sh/Kconfig
@@ -14,6 +14,7 @@ config SUPERH
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_PERF_EVENTS
 	select HAVE_DEBUG_BUGVERBOSE
+	select HAVE_GENERIC_GUP
 	select ARCH_HAVE_CUSTOM_GPIO_H
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG if (GUSA_RB || CPU_SH4A)
 	select ARCH_HAS_GCOV_PROFILE_ALL
@@ -63,6 +64,7 @@ config SUPERH
 config SUPERH32
 	def_bool "$(ARCH)" = "sh"
 	select ARCH_32BIT_OFF_T
+	select GUP_GET_PTE_LOW_HIGH if X2TLB
 	select HAVE_KPROBES
 	select HAVE_KRETPROBES
 	select HAVE_IOREMAP_PROT if MMU && !X2TLB

diff --git a/arch/sh/include/asm/pgtable.h b/arch/sh/include/asm/pgtable.h
index 3587103afe59..9085d1142fa3 100644
--- a/arch/sh/include/asm/pgtable.h
+++ b/arch/sh/include/asm/pgtable.h
@@ -149,6 +149,43 @@ extern void paging_init(void);
 extern void page_table_range_init(unsigned long start, unsigned long end,
				   pgd_t *pgd);

+static inline bool __pte_access_permitted(pte_t pte, u64 prot)
+{
+	return (pte_val(pte) & (prot | _PAGE_SPECIAL)) == prot;
+}
+
+#ifdef CONFIG_X2TLB
+static inline bool pte_access_permitted(pte_t pte, bool write)
+{
+	u64 prot = _PAGE_PRESENT;
+
+	prot |= _PAGE_EXT(_PAGE_EXT_KERN_READ | _PAGE_EXT_USER_READ);
+	if (write)
+		prot |= _PAGE_EXT(_PAGE_EXT_KERN_WRITE | _PAGE_EXT_USER_WRITE);
+	return __pte_access_permitted(pte, prot);
+}
+#elif defined(CONFIG_SUPERH64)
+static inline bool pte_access_permitted(pte_t pte, bool write)
+{
+	u64 prot = _PAGE_PRESENT | _PAGE_USER | _PAGE_READ;
+
+	if (write)
+		prot |= _PAGE_WRITE;
+	return __pte_access_permitted(pte, prot);
+}
+#else
+static inline bool pte_access_permitted(pte_t pte, bool write)
+{
+	u64 prot = _PAGE_PRESENT | _PAGE_USER;
+
+	if (write)
+		prot |= _PAGE_RW;
+	return __pte_access_permitted(pte, prot);
+}
+#endif
+
+#define pte_access_permitted pte_access_permitted
+
 /* arch/sh/mm/mmap.c */
 #define HAVE_ARCH_UNMAPPED_AREA
 #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN

diff --git a/arch/sh/mm/Makefile b/arch/sh/mm/Makefile
index fbe5e79751b3..5051b38fd5b6 100644
--- a/arch/sh/mm/Makefile
+++ b/arch/sh/mm/Makefile
@@ -17,7 +17,7 @@ cacheops-$(CONFIG_CPU_SHX3)	+= cache-shx3.o
 obj-y			+= $(cacheops-y)

 mmu-y			:= nommu.o extable_32.o
-mmu-$(CONFIG_MMU)	:= extable_$(BITS).o fault.o gup.o ioremap.o kmap.o \
+mmu-$(CONFIG_MMU)	:= extable_$(BITS).o fault.o ioremap.o kmap.o \
			   pgtable.o tlbex_$(BITS).o tlbflush_$(BITS).o

 obj-y			+= $(mmu-y)

diff --git a/arch/sh/mm/gup.c b/arch/sh/mm/gup.c
deleted file mode 100644
index 277c882f7489..000000000000
--- a/arch/sh/mm/gup.c
+++ /dev/null
@@ -1,277 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Lockless get_user_pages_fast for SuperH
- *
- * Copyright (C) 2009 - 2010 Paul Mundt
- *
- * Cloned from the x86 and PowerPC versions, by:
- *
- * Copyright (C) 2008 Nick Piggin
- * Copyright (C) 2008 Novell Inc.
- */
-#include
-#include
-#include
-#include
-#include
-
-static inline pte_t gup_get_pte(pte_t *ptep)
-{
-#ifndef CONFIG_X2TLB
-	return READ_ONCE(*ptep);
-#else
-	/*
-	 * With get_user_pages_fast, we walk down the pagetables without
-	 * taking any locks.  For this we would like to load the pointers
-	 * atomically, but that is not possible with 64-bit PTEs.  What
-	 * we do have is the guarantee that a pte will only either go
-	 * from not present to present, or present to not present or both
-	 * -- it will not switch to a completely different present page
-	 * without a TLB flush in between; something that we are blocking
-	 * by holding interrupts off.
-	 *
-	 * Setting ptes from not present to present goes:
-	 * ptep->pte_high = h;
-	 * smp_wmb();
-	 * ptep->pte_low = l;
-	 *
-	 * And present to not present goes:
-	 * ptep->pte_low = 0;
-	 * smp_wmb();
-	 * ptep->pte_high = 0;
-	 *
-	 * We must ensure here that the load of pte_low sees l iff pte_high
-	 * sees h.  We load pte_high *after* loading pte_low, which ensures we
-	 * don't see an older value of pte_high.  *Then* we recheck pte_low,
-	 * which ensures that we haven't picked up a changed pte high.  We might
-	 * have got rubbish values from pte_low and pte_high, but we are
-	 * guaranteed that pte_low will not have the present bit set *unless*
-	 * it is 'l'.  And get_user_pages_fast only operates on present ptes, so
-	 * we're safe.
-	 *
-	 * gup_get_pte should not be used or copied outside gup.c without being
-	 * very careful -- it does not atomically load the pte or anything that
-	 * is likely to be useful for you.
-	 */
-	pte_t pte;
-
-retry:
-	pte.pte_low = ptep->pte_low;
-	smp_rmb();
-	pte.pte_high = ptep->pte_high;
-	smp_rmb();
-	if (unlikely(pte.pte_low != ptep->pte_low))
-		goto retry;
-
-	return pte;
-#endif
-}
-
-/*
- * The performance critical leaf functions are made noinline otherwise gcc
- * inlines everything into a single function which results in too much
- * register pressure.
- */
-static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
-		unsigned long end, int write, struct page **pages, int *nr)
-{
-	u64 mask, result;
-	pte_t *ptep;
-
-#ifdef CONFIG_X2TLB
-	result = _PAGE_PRESENT | _PAGE_EXT(_PAGE_EXT_KERN_READ | _PAGE_EXT_USER_READ);
-	if (write)
-		result |= _PAGE_EXT(_PAGE_EXT_KERN_WRITE | _PAGE_EXT_USER_WRITE);
-#elif defined(CONFIG_SUPERH64)
-	result = _PAGE_PRESENT | _PAGE_USER | _PAGE_READ;
-	if (write)
-		result |= _PAGE_WRITE;
-#else
-	result = _PAGE_PRESENT | _PAGE_USER;
-	if (write)
-		result |= _PAGE_RW;
-#endif
-
-	mask = result | _PAGE_SPECIAL;
-
-	ptep = pte_offset_map(&pmd, addr);
-	do {
-		pte_t pte = gup_get_pte(ptep);
-		struct page *page;
-
-		if ((pte_val(pte) & mask) != result) {
-			pte_unmap(ptep);
-			return 0;
-		}
-		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
-		page = pte_page(pte);
-		get_page(page);
-		__flush_anon_page(page, addr);
-		flush_dcache_page(page);
-		pages[*nr] = page;
-		(*nr)++;
-
-	} while (ptep++, addr += PAGE_SIZE, addr != end);
-	pte_unmap(ptep - 1);
-
-	return 1;
-}
-
-static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
-		int write, struct page **pages, int *nr)
-{
-	unsigned long next;
-	pmd_t *pmdp;
-
-	pmdp = pmd_offset(&pud, addr);
-	do {
-		pmd_t pmd = *pmdp;
-
-		next = pmd_addr_end(addr, end);
-		if (pmd_none(pmd))
-			return 0;
-		if (!gup_pte_range(pmd, addr, next, write, pages, nr))
-			return 0;
-	} while (pmdp++, addr = next, addr != end);
-
-	return 1;
-}
-
-static int gup_pud_range(pgd_t pgd, unsigned long addr, unsigned long end,
-		int write, struct page **pages, int *nr)
-{
-	unsigned long next;
-	pud_t *pudp;
-
-	pudp = pud_offset(&pgd, addr);
-	do {
-		pud_t pud = *pudp;
-
-		next = pud_addr_end(addr, end);
-		if (pud_none(pud))
-			return 0;
-		if (!gup_pmd_range(pud, addr, next, write, pages, nr))
-			return 0;
-	} while (pudp++, addr = next, addr != end);
-
-	return 1;
-}
-
-/*
- * Like get_user_pages_fast() except its IRQ-safe in that it won't fall
- * back to the regular GUP.
- * Note a difference with get_user_pages_fast: this always returns the
- * number of pages pinned, 0 if no pages were pinned.
- */
-int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
-			  struct page **pages)
-{
-	struct mm_struct *mm = current->mm;
-	unsigned long addr, len, end;
-	unsigned long next;
-	unsigned long flags;
-	pgd_t *pgdp;
-	int nr = 0;
-
-	start &= PAGE_MASK;
-	addr = start;
-	len = (unsigned long) nr_pages << PAGE_SHIFT;
-	end = start + len;
-	if (unlikely(!access_ok((void __user *)start, len)))
-		return 0;
-
-	/*
-	 * This doesn't prevent pagetable teardown, but does prevent
-	 * the pagetables and pages from being freed.
-	 */
-	local_irq_save(flags);
-	pgdp = pgd_offset(mm, addr);
-	do {
-		pgd_t pgd = *pgdp;
-
-		next = pgd_addr_end(addr, end);
-		if (pgd_none(pgd))
-			break;
-		if (!gup_pud_range(pgd, addr, next, write, pages, &nr))
-			break;
-	} while (pgdp++, addr = next, addr != end);
-	local_irq_restore(flags);
-
-	return nr;
-}
-
-/**
- * get_user_pages_fast() - pin user pages in memory
- * @start:	starting user address
- * @nr_pages:	number of pages from start to pin
- * @gup_flags:	flags modifying pin behaviour
- * @pages:	array that receives pointers to the pages pinned.
- *		Should be at least nr_pages long.
- *
- * Attempt to pin user pages in memory without taking mm->mmap_sem.
- * If not successful, it will fall back to taking the lock and
- * calling get_user_pages().
- *
- * Returns number of pages pinned. This may be fewer than the number
- * requested. If nr_pages is 0 or negative, returns 0. If no pages
- * were pinned, returns -errno.
- */
-int get_user_pages_fast(unsigned long start, int nr_pages,
-			unsigned int gup_flags, struct page **pages)
-{
-	struct mm_struct *mm = current->mm;
-	unsigned long addr, len, end;
-	unsigned long next;
-	pgd_t *pgdp;
-	int nr = 0;
-
-	start &= PAGE_MASK;
-	addr = start;
-	len = (unsigned long) nr_pages << PAGE_SHIFT;
-
-	end = start + len;
-	if (end < start)
-		goto slow_irqon;
-
-	local_irq_disable();
-	pgdp = pgd_offset(mm, addr);
-	do {
-		pgd_t pgd = *pgdp;
-
-		next = pgd_addr_end(addr, end);
-		if (pgd_none(pgd))
-			goto slow;
-		if (!gup_pud_range(pgd, addr, next, gup_flags & FOLL_WRITE,
-				   pages, &nr))
-			goto slow;
-	} while (pgdp++, addr = next, addr != end);
-	local_irq_enable();
-
-	VM_BUG_ON(nr != (end - start) >> PAGE_SHIFT);
-	return nr;
-
-	{
-		int ret;
-
-slow:
-		local_irq_enable();
-slow_irqon:
-		/* Try to get the remaining pages with get_user_pages */
-		start += nr << PAGE_SHIFT;
-		pages += nr;
-
-		ret = get_user_pages_unlocked(start,
-					      (end - start) >> PAGE_SHIFT,
-					      pages, gup_flags);
-
-		/* Have to be a bit careful with return values */
-		if (nr > 0) {
-			if (ret < 0)
-				ret = nr;
-			else
-				ret += nr;
-		}
-
-		return ret;
-	}
-}

From patchwork Tue Jun 25 14:37:06 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11015675
From: Christoph Hellwig
Subject: [PATCH 07/16] sparc64: add the missing pgd_page definition
Date: Tue, 25 Jun 2019 16:37:06 +0200
Message-Id: <20190625143715.1689-8-hch@lst.de>
In-Reply-To: <20190625143715.1689-1-hch@lst.de>
References: <20190625143715.1689-1-hch@lst.de>

sparc64 only had pgd_page_vaddr, but not pgd_page.

Signed-off-by: Christoph Hellwig
---
 arch/sparc/include/asm/pgtable_64.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 22500c3be7a9..f0dcf991d27f 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -861,6 +861,7 @@ static inline unsigned long pud_page_vaddr(pud_t pud)
 #define pud_clear(pudp)		(pud_val(*(pudp)) = 0UL)
 #define pgd_page_vaddr(pgd)		\
	((unsigned long) __va(pgd_val(pgd)))
+#define pgd_page(pgd)		pfn_to_page(pgd_pfn(pgd))
 #define pgd_present(pgd)	(pgd_val(pgd) != 0U)
 #define pgd_clear(pgdp)		(pgd_val(*(pgdp)) = 0UL)

From patchwork Tue Jun 25 14:37:07 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11015679
From: Christoph Hellwig
Subject: [PATCH 08/16] sparc64: define untagged_addr()
Date: Tue, 25 Jun 2019 16:37:07 +0200
Message-Id: <20190625143715.1689-9-hch@lst.de>
In-Reply-To: <20190625143715.1689-1-hch@lst.de>
References: <20190625143715.1689-1-hch@lst.de>

Add a helper to untag a user pointer. This is needed for ADI support
in get_user_pages_fast.
Signed-off-by: Christoph Hellwig
Reviewed-by: Khalid Aziz
---
 arch/sparc/include/asm/pgtable_64.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index f0dcf991d27f..1904782dcd39 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -1076,6 +1076,28 @@ static inline int io_remap_pfn_range(struct vm_area_struct *vma,
 }
 #define io_remap_pfn_range io_remap_pfn_range

+static inline unsigned long untagged_addr(unsigned long start)
+{
+	if (adi_capable()) {
+		long addr = start;
+
+		/* If userspace has passed a versioned address, kernel
+		 * will not find it in the VMAs since it does not store
+		 * the version tags in the list of VMAs. Storing version
+		 * tags in list of VMAs is impractical since they can be
+		 * changed any time from userspace without dropping into
+		 * kernel. Any address search in VMAs will be done with
+		 * non-versioned addresses. Ensure the ADI version bits
+		 * are dropped here by sign extending the last bit before
+		 * ADI bits. IOMMU does not implement version tags.
+		 */
+		return (addr << (long)adi_nbits()) >> (long)adi_nbits();
+	}
+
+	return start;
+}
+#define untagged_addr untagged_addr
+
 #include
 #include

From patchwork Tue Jun 25 14:37:08 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11015687
From: Christoph Hellwig
To: Andrew Morton, Linus Torvalds, Paul Burton, James Hogan, Yoshinori Sato, Rich Felker, "David S. Miller"
Cc: Nicholas Piggin, Khalid Aziz, Andrey Konovalov, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, linux-mips@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 09/16] sparc64: use the generic get_user_pages_fast code
Date: Tue, 25 Jun 2019 16:37:08 +0200
Message-Id: <20190625143715.1689-10-hch@lst.de>
In-Reply-To: <20190625143715.1689-1-hch@lst.de>
References: <20190625143715.1689-1-hch@lst.de>

The sparc64 code is mostly equivalent to the generic one, minus various
bugfixes and two arch overrides that this patch adds to pgtable.h.
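For readers outside the kernel tree, the shape of the pte_access_permitted() override this patch adds can be sketched in plain C. The bit values below are illustrative stand-ins, not the real sparc64 _PAGE_* constants, which live in arch/sparc/include/asm/pgtable_64.h and differ between sun4u and sun4v:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins for the sparc64 PTE bits (hypothetical values). */
#define PAGE_PRESENT 0x1ULL
#define PAGE_P       0x2ULL
#define PAGE_WRITE   0x4ULL
#define PAGE_SPECIAL 0x8ULL

/* Model of the check: every required protection bit must be set and
 * PAGE_SPECIAL must be clear.  Folding PAGE_SPECIAL into the mask but
 * not into the expected value verifies both conditions in one compare. */
static bool pte_access_permitted_model(uint64_t pte, bool write)
{
	uint64_t prot = PAGE_PRESENT | PAGE_P;

	if (write)
		prot |= PAGE_WRITE;

	return (pte & (prot | PAGE_SPECIAL)) == prot;
}
```

The single masked compare is why the override is cheap enough for the lockless fast path: no branches per bit, and special-mapped pages are rejected for free.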
Signed-off-by: Christoph Hellwig
Reviewed-by: Khalid Aziz
---
 arch/sparc/Kconfig                  |   1 +
 arch/sparc/include/asm/pgtable_64.h |  18 ++
 arch/sparc/mm/Makefile              |   2 +-
 arch/sparc/mm/gup.c                 | 340 ----------------------------
 4 files changed, 20 insertions(+), 341 deletions(-)
 delete mode 100644 arch/sparc/mm/gup.c

diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 26ab6f5bbaaf..22435471f942 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -28,6 +28,7 @@ config SPARC
 	select RTC_DRV_M48T59
 	select RTC_SYSTOHC
 	select HAVE_ARCH_JUMP_LABEL if SPARC64
+	select HAVE_GENERIC_GUP if SPARC64
 	select GENERIC_IRQ_SHOW
 	select ARCH_WANT_IPC_PARSE_VERSION
 	select GENERIC_PCI_IOMAP

diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 1904782dcd39..547ff96fb228 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -1098,6 +1098,24 @@ static inline unsigned long untagged_addr(unsigned long start)
 }
 #define untagged_addr untagged_addr
 
+static inline bool pte_access_permitted(pte_t pte, bool write)
+{
+	u64 prot;
+
+	if (tlb_type == hypervisor) {
+		prot = _PAGE_PRESENT_4V | _PAGE_P_4V;
+		if (write)
+			prot |= _PAGE_WRITE_4V;
+	} else {
+		prot = _PAGE_PRESENT_4U | _PAGE_P_4U;
+		if (write)
+			prot |= _PAGE_WRITE_4U;
+	}
+
+	return (pte_val(pte) & (prot | _PAGE_SPECIAL)) == prot;
+}
+#define pte_access_permitted pte_access_permitted
+
 #include
 #include

diff --git a/arch/sparc/mm/Makefile b/arch/sparc/mm/Makefile
index d39075b1e3b7..b078205b70e0 100644
--- a/arch/sparc/mm/Makefile
+++ b/arch/sparc/mm/Makefile
@@ -5,7 +5,7 @@ asflags-y := -ansi
 ccflags-y := -Werror
 
-obj-$(CONFIG_SPARC64) += ultra.o tlb.o tsb.o gup.o
+obj-$(CONFIG_SPARC64) += ultra.o tlb.o tsb.o
 obj-y += fault_$(BITS).o
 obj-y += init_$(BITS).o
 obj-$(CONFIG_SPARC32) += extable.o srmmu.o iommu.o io-unit.o

diff --git a/arch/sparc/mm/gup.c b/arch/sparc/mm/gup.c
deleted file mode 100644
index 1e770a517d4a..000000000000
--- a/arch/sparc/mm/gup.c
+++ /dev/null
@@ -1,340 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Lockless get_user_pages_fast for sparc, cribbed from powerpc
- *
- * Copyright (C) 2008 Nick Piggin
- * Copyright (C) 2008 Novell Inc.
- */
-
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-/*
- * The performance critical leaf functions are made noinline otherwise gcc
- * inlines everything into a single function which results in too much
- * register pressure.
- */
-static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
-		unsigned long end, int write, struct page **pages, int *nr)
-{
-	unsigned long mask, result;
-	pte_t *ptep;
-
-	if (tlb_type == hypervisor) {
-		result = _PAGE_PRESENT_4V|_PAGE_P_4V;
-		if (write)
-			result |= _PAGE_WRITE_4V;
-	} else {
-		result = _PAGE_PRESENT_4U|_PAGE_P_4U;
-		if (write)
-			result |= _PAGE_WRITE_4U;
-	}
-	mask = result | _PAGE_SPECIAL;
-
-	ptep = pte_offset_kernel(&pmd, addr);
-	do {
-		struct page *page, *head;
-		pte_t pte = *ptep;
-
-		if ((pte_val(pte) & mask) != result)
-			return 0;
-		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
-
-		/* The hugepage case is simplified on sparc64 because
-		 * we encode the sub-page pfn offsets into the
-		 * hugepage PTEs.  We could optimize this in the future
-		 * to use page_cache_add_speculative() for the hugepage case.
-		 */
-		page = pte_page(pte);
-		head = compound_head(page);
-		if (!page_cache_get_speculative(head))
-			return 0;
-		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
-			put_page(head);
-			return 0;
-		}
-
-		pages[*nr] = page;
-		(*nr)++;
-	} while (ptep++, addr += PAGE_SIZE, addr != end);
-
-	return 1;
-}
-
-static int gup_huge_pmd(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
-		unsigned long end, int write, struct page **pages,
-		int *nr)
-{
-	struct page *head, *page;
-	int refs;
-
-	if (!(pmd_val(pmd) & _PAGE_VALID))
-		return 0;
-
-	if (write && !pmd_write(pmd))
-		return 0;
-
-	refs = 0;
-	page = pmd_page(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
-	head = compound_head(page);
-	do {
-		VM_BUG_ON(compound_head(page) != head);
-		pages[*nr] = page;
-		(*nr)++;
-		page++;
-		refs++;
-	} while (addr += PAGE_SIZE, addr != end);
-
-	if (!page_cache_add_speculative(head, refs)) {
-		*nr -= refs;
-		return 0;
-	}
-
-	if (unlikely(pmd_val(pmd) != pmd_val(*pmdp))) {
-		*nr -= refs;
-		while (refs--)
-			put_page(head);
-		return 0;
-	}
-
-	return 1;
-}
-
-static int gup_huge_pud(pud_t *pudp, pud_t pud, unsigned long addr,
-		unsigned long end, int write, struct page **pages,
-		int *nr)
-{
-	struct page *head, *page;
-	int refs;
-
-	if (!(pud_val(pud) & _PAGE_VALID))
-		return 0;
-
-	if (write && !pud_write(pud))
-		return 0;
-
-	refs = 0;
-	page = pud_page(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
-	head = compound_head(page);
-	do {
-		VM_BUG_ON(compound_head(page) != head);
-		pages[*nr] = page;
-		(*nr)++;
-		page++;
-		refs++;
-	} while (addr += PAGE_SIZE, addr != end);
-
-	if (!page_cache_add_speculative(head, refs)) {
-		*nr -= refs;
-		return 0;
-	}
-
-	if (unlikely(pud_val(pud) != pud_val(*pudp))) {
-		*nr -= refs;
-		while (refs--)
-			put_page(head);
-		return 0;
-	}
-
-	return 1;
-}
-
-static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
-		int write, struct page **pages, int *nr)
-{
-	unsigned long next;
-	pmd_t *pmdp;
-
-	pmdp = pmd_offset(&pud, addr);
-	do {
-		pmd_t pmd = *pmdp;
-
-		next = pmd_addr_end(addr, end);
-		if (pmd_none(pmd))
-			return 0;
-		if (unlikely(pmd_large(pmd))) {
-			if (!gup_huge_pmd(pmdp, pmd, addr, next,
-					  write, pages, nr))
-				return 0;
-		} else if (!gup_pte_range(pmd, addr, next, write,
-					  pages, nr))
-			return 0;
-	} while (pmdp++, addr = next, addr != end);
-
-	return 1;
-}
-
-static int gup_pud_range(pgd_t pgd, unsigned long addr, unsigned long end,
-		int write, struct page **pages, int *nr)
-{
-	unsigned long next;
-	pud_t *pudp;
-
-	pudp = pud_offset(&pgd, addr);
-	do {
-		pud_t pud = *pudp;
-
-		next = pud_addr_end(addr, end);
-		if (pud_none(pud))
-			return 0;
-		if (unlikely(pud_large(pud))) {
-			if (!gup_huge_pud(pudp, pud, addr, next,
-					  write, pages, nr))
-				return 0;
-		} else if (!gup_pmd_range(pud, addr, next, write, pages, nr))
-			return 0;
-	} while (pudp++, addr = next, addr != end);
-
-	return 1;
-}
-
-/*
- * Note a difference with get_user_pages_fast: this always returns the
- * number of pages pinned, 0 if no pages were pinned.
- */
-int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
-			  struct page **pages)
-{
-	struct mm_struct *mm = current->mm;
-	unsigned long addr, len, end;
-	unsigned long next, flags;
-	pgd_t *pgdp;
-	int nr = 0;
-
-#ifdef CONFIG_SPARC64
-	if (adi_capable()) {
-		long addr = start;
-
-		/* If userspace has passed a versioned address, kernel
-		 * will not find it in the VMAs since it does not store
-		 * the version tags in the list of VMAs.  Storing version
-		 * tags in list of VMAs is impractical since they can be
-		 * changed any time from userspace without dropping into
-		 * kernel.  Any address search in VMAs will be done with
-		 * non-versioned addresses.  Ensure the ADI version bits
-		 * are dropped here by sign extending the last bit before
-		 * ADI bits.  IOMMU does not implement version tags.
-		 */
-		addr = (addr << (long)adi_nbits()) >> (long)adi_nbits();
-		start = addr;
-	}
-#endif
-	start &= PAGE_MASK;
-	addr = start;
-	len = (unsigned long) nr_pages << PAGE_SHIFT;
-	end = start + len;
-
-	local_irq_save(flags);
-	pgdp = pgd_offset(mm, addr);
-	do {
-		pgd_t pgd = *pgdp;
-
-		next = pgd_addr_end(addr, end);
-		if (pgd_none(pgd))
-			break;
-		if (!gup_pud_range(pgd, addr, next, write, pages, &nr))
-			break;
-	} while (pgdp++, addr = next, addr != end);
-	local_irq_restore(flags);
-
-	return nr;
-}
-
-int get_user_pages_fast(unsigned long start, int nr_pages,
-			unsigned int gup_flags, struct page **pages)
-{
-	struct mm_struct *mm = current->mm;
-	unsigned long addr, len, end;
-	unsigned long next;
-	pgd_t *pgdp;
-	int nr = 0;
-
-#ifdef CONFIG_SPARC64
-	if (adi_capable()) {
-		long addr = start;
-
-		/* If userspace has passed a versioned address, kernel
-		 * will not find it in the VMAs since it does not store
-		 * the version tags in the list of VMAs.  Storing version
-		 * tags in list of VMAs is impractical since they can be
-		 * changed any time from userspace without dropping into
-		 * kernel.  Any address search in VMAs will be done with
-		 * non-versioned addresses.  Ensure the ADI version bits
-		 * are dropped here by sign extending the last bit before
-		 * ADI bits.  IOMMU does not implement version tags.
-		 */
-		addr = (addr << (long)adi_nbits()) >> (long)adi_nbits();
-		start = addr;
-	}
-#endif
-	start &= PAGE_MASK;
-	addr = start;
-	len = (unsigned long) nr_pages << PAGE_SHIFT;
-	end = start + len;
-
-	/*
-	 * XXX: batch / limit 'nr', to avoid large irq off latency
-	 * needs some instrumenting to determine the common sizes used by
-	 * important workloads (eg. DB2), and whether limiting the batch size
-	 * will decrease performance.
-	 *
-	 * It seems like we're in the clear for the moment.  Direct-IO is
-	 * the main guy that batches up lots of get_user_pages, and even
-	 * they are limited to 64-at-a-time which is not so many.
-	 */
-	/*
-	 * This doesn't prevent pagetable teardown, but does prevent
-	 * the pagetables from being freed on sparc.
-	 *
-	 * So long as we atomically load page table pointers versus teardown,
-	 * we can follow the address down to the page and take a ref on it.
-	 */
-	local_irq_disable();
-
-	pgdp = pgd_offset(mm, addr);
-	do {
-		pgd_t pgd = *pgdp;
-
-		next = pgd_addr_end(addr, end);
-		if (pgd_none(pgd))
-			goto slow;
-		if (!gup_pud_range(pgd, addr, next, gup_flags & FOLL_WRITE,
-				   pages, &nr))
-			goto slow;
-	} while (pgdp++, addr = next, addr != end);
-
-	local_irq_enable();
-
-	VM_BUG_ON(nr != (end - start) >> PAGE_SHIFT);
-	return nr;
-
-	{
-		int ret;
-
-slow:
-		local_irq_enable();
-
-		/* Try to get the remaining pages with get_user_pages */
-		start += nr << PAGE_SHIFT;
-		pages += nr;
-
-		ret = get_user_pages_unlocked(start,
-					      (end - start) >> PAGE_SHIFT,
-					      pages, gup_flags);
-
-		/* Have to be a bit careful with return values */
-		if (nr > 0) {
-			if (ret < 0)
-				ret = nr;
-			else
-				ret += nr;
-		}
-
-		return ret;
-	}
-}

From patchwork Tue Jun 25 14:37:09 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11015689
From: Christoph Hellwig
To: Andrew Morton, Linus Torvalds, Paul Burton, James Hogan, Yoshinori Sato, Rich Felker, "David S. Miller"
Cc: Nicholas Piggin, Khalid Aziz, Andrey Konovalov, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, linux-mips@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org, x86@kernel.org, linux-kernel@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH 10/16] mm: rename CONFIG_HAVE_GENERIC_GUP to CONFIG_HAVE_FAST_GUP
Date: Tue, 25 Jun 2019 16:37:09 +0200
Message-Id: <20190625143715.1689-11-hch@lst.de>
In-Reply-To: <20190625143715.1689-1-hch@lst.de>
References: <20190625143715.1689-1-hch@lst.de>

We only support the generic GUP now, so rename the config option to be
more clear, and always use the mm/Kconfig definition of the symbol and
select it from the arch Kconfigs.
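The "define once in mm/Kconfig, select per-arch" pattern the message describes can be sketched as a hedged Kconfig fragment (MYARCH is a placeholder, not a real architecture):

```
# mm/Kconfig: the one and only definition of the capability symbol.
# It has no prompt, so only a `select` can turn it on.
config HAVE_FAST_GUP
	bool

# arch/myarch/Kconfig: each architecture opts in, optionally guarded
# by whatever its fast-GUP support actually depends on.
config MYARCH
	select HAVE_FAST_GUP if MMU
```

Keeping the definition in one place avoids the drift this patch cleans up, where several arch Kconfigs each carried their own `def_bool` copy of the symbol.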
Signed-off-by: Christoph Hellwig
Reviewed-by: Khalid Aziz
Reviewed-by: Jason Gunthorpe
---
 arch/arm/Kconfig     |  5 +----
 arch/arm64/Kconfig   |  4 +---
 arch/mips/Kconfig    |  2 +-
 arch/powerpc/Kconfig |  2 +-
 arch/s390/Kconfig    |  2 +-
 arch/sh/Kconfig      |  2 +-
 arch/sparc/Kconfig   |  2 +-
 arch/x86/Kconfig     |  4 +---
 mm/Kconfig           |  2 +-
 mm/gup.c             |  4 ++--
 10 files changed, 11 insertions(+), 18 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 8869742a85df..3879a3e2c511 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -73,6 +73,7 @@ config ARM
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU
 	select HAVE_EXIT_THREAD
+	select HAVE_FAST_GUP if ARM_LPAE
 	select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
 	select HAVE_FUNCTION_GRAPH_TRACER if !THUMB2_KERNEL && !CC_IS_CLANG
 	select HAVE_FUNCTION_TRACER if !XIP_KERNEL
@@ -1596,10 +1597,6 @@ config ARCH_SELECT_MEMORY_MODEL
 config HAVE_ARCH_PFN_VALID
 	def_bool ARCH_HAS_HOLES_MEMORYMODEL || !SPARSEMEM
 
-config HAVE_GENERIC_GUP
-	def_bool y
-	depends on ARM_LPAE
-
 config HIGHMEM
 	bool "High Memory Support"
 	depends on MMU

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 697ea0510729..4a6ee3e92757 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -140,6 +140,7 @@ config ARM64
 	select HAVE_DMA_CONTIGUOUS
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
+	select HAVE_FAST_GUP
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FUNCTION_GRAPH_TRACER
@@ -262,9 +263,6 @@ config GENERIC_CALIBRATE_DELAY
 config ZONE_DMA32
 	def_bool y
 
-config HAVE_GENERIC_GUP
-	def_bool y
-
 config ARCH_ENABLE_MEMORY_HOTPLUG
 	def_bool y

diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 64108a2a16d4..b1e42f0e4ed0 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -54,10 +54,10 @@ config MIPS
 	select HAVE_DMA_CONTIGUOUS
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_EXIT_THREAD
+	select HAVE_FAST_GUP
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_FUNCTION_TRACER
-	select HAVE_GENERIC_GUP
 	select HAVE_IDE
 	select HAVE_IOREMAP_PROT
 	select HAVE_IRQ_EXIT_ON_IRQ_STACK

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 8c1c636308c8..992a04796e56 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -185,12 +185,12 @@ config PPC
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS if MPROFILE_KERNEL
 	select HAVE_EBPF_JIT if PPC64
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS if !(CPU_LITTLE_ENDIAN && POWER7_CPU)
+	select HAVE_FAST_GUP
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_ERROR_INJECTION
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_FUNCTION_TRACER
 	select HAVE_GCC_PLUGINS if GCC_VERSION >= 50200 # plugin support on gcc <= 5.1 is buggy on PPC
-	select HAVE_GENERIC_GUP
 	select HAVE_HW_BREAKPOINT if PERF_EVENTS && (PPC_BOOK3S || PPC_8xx)
 	select HAVE_IDE
 	select HAVE_IOREMAP_PROT

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 109243fdb6ec..aaff0376bf53 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -137,6 +137,7 @@ config S390
 	select HAVE_DMA_CONTIGUOUS
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS
+	select HAVE_FAST_GUP
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
 	select HAVE_FENTRY
 	select HAVE_FTRACE_MCOUNT_RECORD
@@ -144,7 +145,6 @@ config S390
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FUTEX_CMPXCHG if FUTEX
 	select HAVE_GCC_PLUGINS
-	select HAVE_GENERIC_GUP
 	select HAVE_KERNEL_BZIP2
 	select HAVE_KERNEL_GZIP
 	select HAVE_KERNEL_LZ4

diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
index 6fddfc3c9710..f6b15ecc37a3 100644
--- a/arch/sh/Kconfig
+++ b/arch/sh/Kconfig
@@ -14,7 +14,7 @@ config SUPERH
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_PERF_EVENTS
 	select HAVE_DEBUG_BUGVERBOSE
-	select HAVE_GENERIC_GUP
+	select HAVE_FAST_GUP if MMU
 	select ARCH_HAVE_CUSTOM_GPIO_H
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG if (GUSA_RB || CPU_SH4A)
 	select ARCH_HAS_GCOV_PROFILE_ALL

diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 22435471f942..659232b760e1 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -28,7 +28,7 @@ config SPARC
 	select RTC_DRV_M48T59
 	select RTC_SYSTOHC
 	select HAVE_ARCH_JUMP_LABEL if SPARC64
-	select HAVE_GENERIC_GUP if SPARC64
+	select HAVE_FAST_GUP if SPARC64
 	select GENERIC_IRQ_SHOW
 	select ARCH_WANT_IPC_PARSE_VERSION
 	select GENERIC_PCI_IOMAP

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 7cd53cc59f0f..44500e0ed630 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -157,6 +157,7 @@ config X86
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
 	select HAVE_EISA
 	select HAVE_EXIT_THREAD
+	select HAVE_FAST_GUP
 	select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_GRAPH_TRACER
@@ -2874,9 +2875,6 @@ config HAVE_ATOMIC_IOMAP
 config X86_DEV_DMA_OPS
 	bool
 
-config HAVE_GENERIC_GUP
-	def_bool y
-
 source "drivers/firmware/Kconfig"
 
 source "arch/x86/kvm/Kconfig"

diff --git a/mm/Kconfig b/mm/Kconfig
index fe51f104a9e0..98dffb0f2447 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -132,7 +132,7 @@ config HAVE_MEMBLOCK_NODE_MAP
 config HAVE_MEMBLOCK_PHYS_MAP
 	bool
 
-config HAVE_GENERIC_GUP
+config HAVE_FAST_GUP
 	bool
 
 config ARCH_KEEP_MEMBLOCK

diff --git a/mm/gup.c b/mm/gup.c
index 9b72f2ea3471..7328890ad8d3 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1651,7 +1651,7 @@ struct page *get_dump_page(unsigned long addr)
 #endif /* CONFIG_ELF_CORE */
 
 /*
- * Generic Fast GUP
+ * Fast GUP
  *
  * get_user_pages_fast attempts to pin user pages by walking the page
  * tables directly and avoids taking locks. Thus the walker needs to be
@@ -1683,7 +1683,7 @@ struct page *get_dump_page(unsigned long addr)
  *
  * This code is based heavily on the PowerPC implementation by Nick Piggin.
  */
-#ifdef CONFIG_HAVE_GENERIC_GUP
+#ifdef CONFIG_HAVE_FAST_GUP
 #ifdef CONFIG_GUP_GET_PTE_LOW_HIGH
 /*
  * WARNING: only to be used in the get_user_pages_fast() implementation.
From patchwork Tue Jun 25 14:37:10 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11015703
From: Christoph Hellwig
To: Andrew Morton, Linus Torvalds, Paul Burton, James Hogan, Yoshinori Sato, Rich Felker, "David S. Miller"
Cc: Nicholas Piggin, Khalid Aziz, Andrey Konovalov, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, linux-mips@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 11/16] mm: reorder code blocks in gup.c
Date: Tue, 25 Jun 2019 16:37:10 +0200
Message-Id: <20190625143715.1689-12-hch@lst.de>
In-Reply-To: <20190625143715.1689-1-hch@lst.de>
References: <20190625143715.1689-1-hch@lst.de>

This moves the actually exported functions towards the end of the file,
and reorders some functions to be in more logical blocks as a
preparation for moving various stubs inline into the main functionality
using IS_ENABLED().
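The IS_ENABLED() style the message refers to can be sketched as follows. IS_ENABLED and CONFIG_HAVE_FAST_GUP_ENABLED below are hard-wired toy stand-ins for the kernel's Kconfig-generated macros, and gup_fast_model() is a hypothetical placeholder, not the real fast-path code:

```c
/* Toy stand-in: the kernel's IS_ENABLED(CONFIG_FOO) expands to 1 or 0
 * depending on the Kconfig symbol; here it is hard-wired to 1. */
#define CONFIG_HAVE_FAST_GUP_ENABLED 1
#define IS_ENABLED(opt) (opt)

/* With IS_ENABLED() a single definition replaces a pair of #ifdef
 * stubs: the disabled case becomes an ordinary branch that the
 * compiler eliminates as dead code when the option is off. */
static int gup_fast_model(int nr_pages)
{
	if (!IS_ENABLED(CONFIG_HAVE_FAST_GUP_ENABLED))
		return 0;	/* former stub behaviour, now inline */
	return nr_pages;	/* stand-in for the real fast-path work */
}
```

Compared with #ifdef stubs, both branches are always parsed and type-checked, so a config-dependent build break is caught regardless of which configuration is compiled.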
Signed-off-by: Christoph Hellwig
---
 mm/gup.c | 410 +++++++++++++++++++++++++++----------------------------
 1 file changed, 205 insertions(+), 205 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 7328890ad8d3..b29249581672 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1100,86 +1100,6 @@ static __always_inline long __get_user_pages_locked(struct task_struct *tsk,
 	return pages_done;
 }
 
-/*
- * We can leverage the VM_FAULT_RETRY functionality in the page fault
- * paths better by using either get_user_pages_locked() or
- * get_user_pages_unlocked().
- *
- * get_user_pages_locked() is suitable to replace the form:
- *
- *      down_read(&mm->mmap_sem);
- *      do_something()
- *      get_user_pages(tsk, mm, ..., pages, NULL);
- *      up_read(&mm->mmap_sem);
- *
- *  to:
- *
- *      int locked = 1;
- *      down_read(&mm->mmap_sem);
- *      do_something()
- *      get_user_pages_locked(tsk, mm, ..., pages, &locked);
- *      if (locked)
- *          up_read(&mm->mmap_sem);
- */
-long get_user_pages_locked(unsigned long start, unsigned long nr_pages,
-			   unsigned int gup_flags, struct page **pages,
-			   int *locked)
-{
-	/*
-	 * FIXME: Current FOLL_LONGTERM behavior is incompatible with
-	 * FAULT_FLAG_ALLOW_RETRY because of the FS DAX check requirement on
-	 * vmas.  As there are no users of this flag in this call we simply
-	 * disallow this option for now.
-	 */
-	if (WARN_ON_ONCE(gup_flags & FOLL_LONGTERM))
-		return -EINVAL;
-
-	return __get_user_pages_locked(current, current->mm, start, nr_pages,
-				       pages, NULL, locked,
-				       gup_flags | FOLL_TOUCH);
-}
-EXPORT_SYMBOL(get_user_pages_locked);
-
-/*
- * get_user_pages_unlocked() is suitable to replace the form:
- *
- *      down_read(&mm->mmap_sem);
- *      get_user_pages(tsk, mm, ..., pages, NULL);
- *      up_read(&mm->mmap_sem);
- *
- *  with:
- *
- *      get_user_pages_unlocked(tsk, mm, ..., pages);
- *
- * It is functionally equivalent to get_user_pages_fast so
- * get_user_pages_fast should be used instead if specific gup_flags
- * (e.g. FOLL_FORCE) are not required.
- */
-long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
-			     struct page **pages, unsigned int gup_flags)
-{
-	struct mm_struct *mm = current->mm;
-	int locked = 1;
-	long ret;
-
-	/*
-	 * FIXME: Current FOLL_LONGTERM behavior is incompatible with
-	 * FAULT_FLAG_ALLOW_RETRY because of the FS DAX check requirement on
-	 * vmas.  As there are no users of this flag in this call we simply
-	 * disallow this option for now.
-	 */
-	if (WARN_ON_ONCE(gup_flags & FOLL_LONGTERM))
-		return -EINVAL;
-
-	down_read(&mm->mmap_sem);
-	ret = __get_user_pages_locked(current, mm, start, nr_pages, pages, NULL,
-				      &locked, gup_flags | FOLL_TOUCH);
-	if (locked)
-		up_read(&mm->mmap_sem);
-	return ret;
-}
-EXPORT_SYMBOL(get_user_pages_unlocked);
-
 /*
  * get_user_pages_remote() - pin user pages in memory
  * @tsk:	the task_struct to use for page fault accounting, or
@@ -1256,6 +1176,153 @@ long get_user_pages_remote(struct task_struct *tsk, struct mm_struct *mm,
 }
 EXPORT_SYMBOL(get_user_pages_remote);
 
+/**
+ * populate_vma_page_range() -  populate a range of pages in the vma.
+ * @vma:   target vma
+ * @start: start address
+ * @end:   end address
+ * @nonblocking:
+ *
+ * This takes care of mlocking the pages too if VM_LOCKED is set.
+ *
+ * return 0 on success, negative error code on error.
+ *
+ * vma->vm_mm->mmap_sem must be held.
+ *
+ * If @nonblocking is NULL, it may be held for read or write and will
+ * be unperturbed.
+ *
+ * If @nonblocking is non-NULL, it must held for read only and may be
+ * released.  If it's released, *@nonblocking will be set to 0.
+ */
+long populate_vma_page_range(struct vm_area_struct *vma,
+		unsigned long start, unsigned long end, int *nonblocking)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long nr_pages = (end - start) / PAGE_SIZE;
+	int gup_flags;
+
+	VM_BUG_ON(start & ~PAGE_MASK);
+	VM_BUG_ON(end & ~PAGE_MASK);
+	VM_BUG_ON_VMA(start < vma->vm_start, vma);
+	VM_BUG_ON_VMA(end > vma->vm_end, vma);
+	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_sem), mm);
+
+	gup_flags = FOLL_TOUCH | FOLL_POPULATE | FOLL_MLOCK;
+	if (vma->vm_flags & VM_LOCKONFAULT)
+		gup_flags &= ~FOLL_POPULATE;
+	/*
+	 * We want to touch writable mappings with a write fault in order
+	 * to break COW, except for shared mappings because these don't COW
+	 * and we would not want to dirty them for nothing.
+	 */
+	if ((vma->vm_flags & (VM_WRITE | VM_SHARED)) == VM_WRITE)
+		gup_flags |= FOLL_WRITE;
+
+	/*
+	 * We want mlock to succeed for regions that have any permissions
+	 * other than PROT_NONE.
+	 */
+	if (vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC))
+		gup_flags |= FOLL_FORCE;
+
+	/*
+	 * We made sure addr is within a VMA, so the following will
+	 * not result in a stack expansion that recurses back here.
+	 */
+	return __get_user_pages(current, mm, start, nr_pages, gup_flags,
+				NULL, NULL, nonblocking);
+}
+
+/*
+ * __mm_populate - populate and/or mlock pages within a range of address space.
+ *
+ * This is used to implement mlock() and the MAP_POPULATE / MAP_LOCKED mmap
+ * flags. VMAs must be already marked with the desired vm_flags, and
+ * mmap_sem must not be held.
+ */
+int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
+{
+	struct mm_struct *mm = current->mm;
+	unsigned long end, nstart, nend;
+	struct vm_area_struct *vma = NULL;
+	int locked = 0;
+	long ret = 0;
+
+	end = start + len;
+
+	for (nstart = start; nstart < end; nstart = nend) {
+		/*
+		 * We want to fault in pages for [nstart; end) address range.
+		 * Find first corresponding VMA.
+		 */
+		if (!locked) {
+			locked = 1;
+			down_read(&mm->mmap_sem);
+			vma = find_vma(mm, nstart);
+		} else if (nstart >= vma->vm_end)
+			vma = vma->vm_next;
+		if (!vma || vma->vm_start >= end)
+			break;
+		/*
+		 * Set [nstart; nend) to intersection of desired address
+		 * range with the first VMA. Also, skip undesirable VMA types.
+		 */
+		nend = min(end, vma->vm_end);
+		if (vma->vm_flags & (VM_IO | VM_PFNMAP))
+			continue;
+		if (nstart < vma->vm_start)
+			nstart = vma->vm_start;
+		/*
+		 * Now fault in a range of pages. populate_vma_page_range()
+		 * double checks the vma flags, so that it won't mlock pages
+		 * if the vma was already munlocked.
+		 */
+		ret = populate_vma_page_range(vma, nstart, nend, &locked);
+		if (ret < 0) {
+			if (ignore_errors) {
+				ret = 0;
+				continue;	/* continue at next VMA */
+			}
+			break;
+		}
+		nend = nstart + ret * PAGE_SIZE;
+		ret = 0;
+	}
+	if (locked)
+		up_read(&mm->mmap_sem);
+	return ret;	/* 0 or negative error code */
+}
+
+/**
+ * get_dump_page() - pin user page in memory while writing it to core dump
+ * @addr: user address
+ *
+ * Returns struct page pointer of user page pinned for dump,
+ * to be freed afterwards by put_page().
+ *
+ * Returns NULL on any kind of failure - a hole must then be inserted into
+ * the corefile, to preserve alignment with its headers; and also returns
+ * NULL wherever the ZERO_PAGE, or an anonymous pte_none, has been found -
+ * allowing a hole to be left in the corefile to save diskspace.
+ *
+ * Called without mmap_sem, but after all other threads have been killed.
+ */
+#ifdef CONFIG_ELF_CORE
+struct page *get_dump_page(unsigned long addr)
+{
+	struct vm_area_struct *vma;
+	struct page *page;
+
+	if (__get_user_pages(current, current->mm, addr, 1,
+			     FOLL_FORCE | FOLL_DUMP | FOLL_GET, &page, &vma,
+			     NULL) < 1)
+		return NULL;
+	flush_cache_page(vma, addr, page_to_pfn(page));
+	return page;
+}
+#endif /* CONFIG_ELF_CORE */
+
 #if defined(CONFIG_FS_DAX) || defined (CONFIG_CMA)
 static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
 {
@@ -1503,152 +1570,85 @@ long get_user_pages(unsigned long start, unsigned long nr_pages,
 }
 EXPORT_SYMBOL(get_user_pages);
 
-/**
- * populate_vma_page_range() -  populate a range of pages in the vma.
- * @vma:   target vma
- * @start: start address
- * @end:   end address
- * @nonblocking:
- *
- * This takes care of mlocking the pages too if VM_LOCKED is set.
+/*
+ * We can leverage the VM_FAULT_RETRY functionality in the page fault
+ * paths better by using either get_user_pages_locked() or
+ * get_user_pages_unlocked().
  *
- * return 0 on success, negative error code on error.
+ * get_user_pages_locked() is suitable to replace the form:
  *
- * vma->vm_mm->mmap_sem must be held.
+ *      down_read(&mm->mmap_sem);
+ *      do_something()
+ *      get_user_pages(tsk, mm, ..., pages, NULL);
+ *      up_read(&mm->mmap_sem);
  *
- * If @nonblocking is NULL, it may be held for read or write and will
- * be unperturbed.
+ *  to:
  *
- * If @nonblocking is non-NULL, it must held for read only and may be
- * released.  If it's released, *@nonblocking will be set to 0.
+ *      int locked = 1;
+ *      down_read(&mm->mmap_sem);
+ *      do_something()
+ *      get_user_pages_locked(tsk, mm, ..., pages, &locked);
+ *      if (locked)
+ *          up_read(&mm->mmap_sem);
  */
-long populate_vma_page_range(struct vm_area_struct *vma,
-		unsigned long start, unsigned long end, int *nonblocking)
+long get_user_pages_locked(unsigned long start, unsigned long nr_pages,
+			   unsigned int gup_flags, struct page **pages,
+			   int *locked)
 {
-	struct mm_struct *mm = vma->vm_mm;
-	unsigned long nr_pages = (end - start) / PAGE_SIZE;
-	int gup_flags;
-
-	VM_BUG_ON(start & ~PAGE_MASK);
-	VM_BUG_ON(end & ~PAGE_MASK);
-	VM_BUG_ON_VMA(start < vma->vm_start, vma);
-	VM_BUG_ON_VMA(end > vma->vm_end, vma);
-	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_sem), mm);
-
-	gup_flags = FOLL_TOUCH | FOLL_POPULATE | FOLL_MLOCK;
-	if (vma->vm_flags & VM_LOCKONFAULT)
-		gup_flags &= ~FOLL_POPULATE;
-	/*
-	 * We want to touch writable mappings with a write fault in order
-	 * to break COW, except for shared mappings because these don't COW
-	 * and we would not want to dirty them for nothing.
-	 */
-	if ((vma->vm_flags & (VM_WRITE | VM_SHARED)) == VM_WRITE)
-		gup_flags |= FOLL_WRITE;
-
 	/*
-	 * We want mlock to succeed for regions that have any permissions
-	 * other than PROT_NONE.
+	 * FIXME: Current FOLL_LONGTERM behavior is incompatible with
+	 * FAULT_FLAG_ALLOW_RETRY because of the FS DAX check requirement on
+	 * vmas.  As there are no users of this flag in this call we simply
+	 * disallow this option for now.
 	 */
-	if (vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC))
-		gup_flags |= FOLL_FORCE;
+	if (WARN_ON_ONCE(gup_flags & FOLL_LONGTERM))
+		return -EINVAL;
 
-	/*
-	 * We made sure addr is within a VMA, so the following will
-	 * not result in a stack expansion that recurses back here.
-	 */
-	return __get_user_pages(current, mm, start, nr_pages, gup_flags,
-				NULL, NULL, nonblocking);
+	return __get_user_pages_locked(current, current->mm, start, nr_pages,
+				       pages, NULL, locked,
+				       gup_flags | FOLL_TOUCH);
 }
+EXPORT_SYMBOL(get_user_pages_locked);
 
 /*
- * __mm_populate - populate and/or mlock pages within a range of address space.
+ * get_user_pages_unlocked() is suitable to replace the form:
  *
- * This is used to implement mlock() and the MAP_POPULATE / MAP_LOCKED mmap
- * flags. VMAs must be already marked with the desired vm_flags, and
- * mmap_sem must not be held.
+ *      down_read(&mm->mmap_sem);
+ *      get_user_pages(tsk, mm, ..., pages, NULL);
+ *      up_read(&mm->mmap_sem);
+ *
+ *  with:
+ *
+ *      get_user_pages_unlocked(tsk, mm, ..., pages);
+ *
+ * It is functionally equivalent to get_user_pages_fast so
+ * get_user_pages_fast should be used instead if specific gup_flags
+ * (e.g. FOLL_FORCE) are not required.
  */
-int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
+long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
+			     struct page **pages, unsigned int gup_flags)
 {
 	struct mm_struct *mm = current->mm;
-	unsigned long end, nstart, nend;
-	struct vm_area_struct *vma = NULL;
-	int locked = 0;
-	long ret = 0;
+	int locked = 1;
+	long ret;
 
-	end = start + len;
+	/*
+	 * FIXME: Current FOLL_LONGTERM behavior is incompatible with
+	 * FAULT_FLAG_ALLOW_RETRY because of the FS DAX check requirement on
+	 * vmas.  As there are no users of this flag in this call we simply
+	 * disallow this option for now.
	 */
+	if (WARN_ON_ONCE(gup_flags & FOLL_LONGTERM))
+		return -EINVAL;
 
-	for (nstart = start; nstart < end; nstart = nend) {
-		/*
-		 * We want to fault in pages for [nstart; end) address range.
-		 * Find first corresponding VMA.
-		 */
-		if (!locked) {
-			locked = 1;
-			down_read(&mm->mmap_sem);
-			vma = find_vma(mm, nstart);
-		} else if (nstart >= vma->vm_end)
-			vma = vma->vm_next;
-		if (!vma || vma->vm_start >= end)
-			break;
-		/*
-		 * Set [nstart; nend) to intersection of desired address
-		 * range with the first VMA. Also, skip undesirable VMA types.
-		 */
-		nend = min(end, vma->vm_end);
-		if (vma->vm_flags & (VM_IO | VM_PFNMAP))
-			continue;
-		if (nstart < vma->vm_start)
-			nstart = vma->vm_start;
-		/*
-		 * Now fault in a range of pages. populate_vma_page_range()
-		 * double checks the vma flags, so that it won't mlock pages
-		 * if the vma was already munlocked.
-		 */
-		ret = populate_vma_page_range(vma, nstart, nend, &locked);
-		if (ret < 0) {
-			if (ignore_errors) {
-				ret = 0;
-				continue;	/* continue at next VMA */
-			}
-			break;
-		}
-		nend = nstart + ret * PAGE_SIZE;
-		ret = 0;
-	}
+	down_read(&mm->mmap_sem);
+	ret = __get_user_pages_locked(current, mm, start, nr_pages, pages, NULL,
+				      &locked, gup_flags | FOLL_TOUCH);
 	if (locked)
 		up_read(&mm->mmap_sem);
-	return ret;	/* 0 or negative error code */
-}
-
-/**
- * get_dump_page() - pin user page in memory while writing it to core dump
- * @addr: user address
- *
- * Returns struct page pointer of user page pinned for dump,
- * to be freed afterwards by put_page().
- *
- * Returns NULL on any kind of failure - a hole must then be inserted into
- * the corefile, to preserve alignment with its headers; and also returns
- * NULL wherever the ZERO_PAGE, or an anonymous pte_none, has been found -
- * allowing a hole to be left in the corefile to save diskspace.
- *
- * Called without mmap_sem, but after all other threads have been killed.
- */
-#ifdef CONFIG_ELF_CORE
-struct page *get_dump_page(unsigned long addr)
-{
-	struct vm_area_struct *vma;
-	struct page *page;
-
-	if (__get_user_pages(current, current->mm, addr, 1,
-			     FOLL_FORCE | FOLL_DUMP | FOLL_GET, &page, &vma,
-			     NULL) < 1)
-		return NULL;
-	flush_cache_page(vma, addr, page_to_pfn(page));
-	return page;
+	return ret;
 }
-#endif /* CONFIG_ELF_CORE */
+EXPORT_SYMBOL(get_user_pages_unlocked);
 
 /*
  * Fast GUP
From: Christoph Hellwig
To: Andrew Morton, Linus Torvalds, Paul Burton, James Hogan, Yoshinori Sato, Rich Felker, "David S. Miller"
Cc: Nicholas Piggin, Khalid Aziz, Andrey Konovalov, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, linux-mips@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 12/16] mm: consolidate the get_user_pages* implementations
Date: Tue, 25 Jun 2019 16:37:11 +0200
Message-Id: <20190625143715.1689-13-hch@lst.de>
In-Reply-To: <20190625143715.1689-1-hch@lst.de>
References: <20190625143715.1689-1-hch@lst.de>

Always build mm/gup.c so that we don't have to provide separate nommu stubs. Also merge the get_user_pages_fast and __get_user_pages_fast stubs used without HAVE_FAST_GUP into the main implementations, which will never call the fast path if HAVE_FAST_GUP is not set. This also makes the new put_user_pages* helpers available on nommu; they are currently missing there, which would become a problem as soon as they grew users.
Signed-off-by: Christoph Hellwig
---
 mm/Kconfig  |  1 +
 mm/Makefile |  4 +--
 mm/gup.c    | 67 +++++++++++++++++++++++++++++++++++++---
 mm/nommu.c  | 88 -----------------------------------------------------
 mm/util.c   | 47 ----------------------------
 5 files changed, 65 insertions(+), 142 deletions(-)

diff --git a/mm/Kconfig b/mm/Kconfig
index 98dffb0f2447..5c41409557da 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -133,6 +133,7 @@ config HAVE_MEMBLOCK_PHYS_MAP
 	bool
 
 config HAVE_FAST_GUP
+	depends on MMU
 	bool
 
 config ARCH_KEEP_MEMBLOCK
diff --git a/mm/Makefile b/mm/Makefile
index ac5e5ba78874..dc0746ca1109 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -22,7 +22,7 @@ KCOV_INSTRUMENT_mmzone.o := n
 KCOV_INSTRUMENT_vmstat.o := n
 
 mmu-y			:= nommu.o
-mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
+mmu-$(CONFIG_MMU)	:= highmem.o memory.o mincore.o \
 			   mlock.o mmap.o mmu_gather.o mprotect.o mremap.o \
 			   msync.o page_vma_mapped.o pagewalk.o \
 			   pgtable-generic.o rmap.o vmalloc.o
@@ -39,7 +39,7 @@ obj-y			:= filemap.o mempool.o oom_kill.o fadvise.o \
 			   mm_init.o mmu_context.o percpu.o slab_common.o \
 			   compaction.o vmacache.o \
 			   interval_tree.o list_lru.o workingset.o \
-			   debug.o $(mmu-y)
+			   debug.o gup.o $(mmu-y)
 
 # Give 'page_alloc' its own module-parameter namespace
 page-alloc-y := page_alloc.o
diff --git a/mm/gup.c b/mm/gup.c
index b29249581672..0e83dba98dfd 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -134,6 +134,7 @@ void put_user_pages(struct page **pages, unsigned long npages)
 }
 EXPORT_SYMBOL(put_user_pages);
 
+#ifdef CONFIG_MMU
 static struct page *no_page_table(struct vm_area_struct *vma,
 		unsigned int flags)
 {
@@ -1322,6 +1323,51 @@ struct page *get_dump_page(unsigned long addr)
 	return page;
 }
 #endif /* CONFIG_ELF_CORE */
+#else /* CONFIG_MMU */
+static long __get_user_pages_locked(struct task_struct *tsk,
+		struct mm_struct *mm, unsigned long start,
+		unsigned long nr_pages, struct page **pages,
+		struct vm_area_struct **vmas, int *locked,
+		unsigned int foll_flags)
+{
+	struct vm_area_struct *vma;
+	unsigned long vm_flags;
+	int i;
+
+	/* calculate required read or write permissions.
+	 * If FOLL_FORCE is set, we only require the "MAY" flags.
+	 */
+	vm_flags  = (foll_flags & FOLL_WRITE) ?
+			(VM_WRITE | VM_MAYWRITE) : (VM_READ | VM_MAYREAD);
+	vm_flags &= (foll_flags & FOLL_FORCE) ?
+			(VM_MAYREAD | VM_MAYWRITE) : (VM_READ | VM_WRITE);
+
+	for (i = 0; i < nr_pages; i++) {
+		vma = find_vma(mm, start);
+		if (!vma)
+			goto finish_or_fault;
+
+		/* protect what we can, including chardevs */
+		if ((vma->vm_flags & (VM_IO | VM_PFNMAP)) ||
+		    !(vm_flags & vma->vm_flags))
+			goto finish_or_fault;
+
+		if (pages) {
+			pages[i] = virt_to_page(start);
+			if (pages[i])
+				get_page(pages[i]);
+		}
+		if (vmas)
+			vmas[i] = vma;
+		start = (start + PAGE_SIZE) & PAGE_MASK;
+	}
+
+	return i;
+
+finish_or_fault:
+	return i ? : -EFAULT;
+}
+#endif /* !CONFIG_MMU */
 
 #if defined(CONFIG_FS_DAX) || defined (CONFIG_CMA)
 static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
@@ -1484,7 +1530,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 {
 	return nr_pages;
 }
-#endif
+#endif /* CONFIG_CMA */
 
 /*
  * __gup_longterm_locked() is a wrapper for __get_user_pages_locked which
@@ -2160,6 +2206,12 @@ static void gup_pgd_range(unsigned long addr, unsigned long end,
 			return;
 	} while (pgdp++, addr = next, addr != end);
 }
+#else
+static inline void gup_pgd_range(unsigned long addr, unsigned long end,
+		unsigned int flags, struct page **pages, int *nr)
+{
+}
+#endif /* CONFIG_HAVE_FAST_GUP */
 
 #ifndef gup_fast_permitted
 /*
@@ -2177,6 +2229,9 @@ static bool gup_fast_permitted(unsigned long start, unsigned long end)
  *  the regular GUP.
  * Note a difference with get_user_pages_fast: this always returns the
  * number of pages pinned, 0 if no pages were pinned.
+ *
+ * If the architecture does not support this function, simply return with no
+ * pages pinned.
  */
 int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
 			  struct page **pages)
@@ -2206,7 +2261,8 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
 	 * block IPIs that come from THPs splitting.
 	 */
 
-	if (gup_fast_permitted(start, end)) {
+	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
+	    gup_fast_permitted(start, end)) {
 		local_irq_save(flags);
 		gup_pgd_range(start, end, write ? FOLL_WRITE : 0, pages, &nr);
 		local_irq_restore(flags);
@@ -2214,6 +2270,7 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
 
 	return nr;
 }
+EXPORT_SYMBOL_GPL(__get_user_pages_fast);
 
 static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
 				   unsigned int gup_flags, struct page **pages)
@@ -2270,7 +2327,8 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
 	if (unlikely(!access_ok((void __user *)start, len)))
 		return -EFAULT;
 
-	if (gup_fast_permitted(start, end)) {
+	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
+	    gup_fast_permitted(start, end)) {
 		local_irq_disable();
 		gup_pgd_range(addr, end, gup_flags, pages, &nr);
 		local_irq_enable();
@@ -2296,5 +2354,4 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
 
 	return ret;
 }
-
-#endif /* CONFIG_HAVE_GENERIC_GUP */
+EXPORT_SYMBOL_GPL(get_user_pages_fast);
diff --git a/mm/nommu.c b/mm/nommu.c
index d8c02fbe03b5..07165ad2e548 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -111,94 +111,6 @@ unsigned int kobjsize(const void *objp)
 	return PAGE_SIZE << compound_order(page);
 }
 
-static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
-		unsigned long start, unsigned long nr_pages,
-		unsigned int foll_flags, struct page **pages,
-		struct vm_area_struct **vmas, int *nonblocking)
-{
-	struct vm_area_struct *vma;
-	unsigned long vm_flags;
-	int i;
-
-	/* calculate required read or write permissions.
-	 * If FOLL_FORCE is set, we only require the "MAY" flags.
-	 */
-	vm_flags  = (foll_flags & FOLL_WRITE) ?
-			(VM_WRITE | VM_MAYWRITE) : (VM_READ | VM_MAYREAD);
-	vm_flags &= (foll_flags & FOLL_FORCE) ?
-			(VM_MAYREAD | VM_MAYWRITE) : (VM_READ | VM_WRITE);
-
-	for (i = 0; i < nr_pages; i++) {
-		vma = find_vma(mm, start);
-		if (!vma)
-			goto finish_or_fault;
-
-		/* protect what we can, including chardevs */
-		if ((vma->vm_flags & (VM_IO | VM_PFNMAP)) ||
-		    !(vm_flags & vma->vm_flags))
-			goto finish_or_fault;
-
-		if (pages) {
-			pages[i] = virt_to_page(start);
-			if (pages[i])
-				get_page(pages[i]);
-		}
-		if (vmas)
-			vmas[i] = vma;
-		start = (start + PAGE_SIZE) & PAGE_MASK;
-	}
-
-	return i;
-
-finish_or_fault:
-	return i ? : -EFAULT;
-}
-
-/*
- * get a list of pages in an address range belonging to the specified process
- * and indicate the VMA that covers each page
- * - this is potentially dodgy as we may end incrementing the page count of a
- *   slab page or a secondary page from a compound page
- * - don't permit access to VMAs that don't support it, such as I/O mappings
- */
-long get_user_pages(unsigned long start, unsigned long nr_pages,
-		    unsigned int gup_flags, struct page **pages,
-		    struct vm_area_struct **vmas)
-{
-	return __get_user_pages(current, current->mm, start, nr_pages,
-				gup_flags, pages, vmas, NULL);
-}
-EXPORT_SYMBOL(get_user_pages);
-
-long get_user_pages_locked(unsigned long start, unsigned long nr_pages,
-			   unsigned int gup_flags, struct page **pages,
-			   int *locked)
-{
-	return get_user_pages(start, nr_pages, gup_flags, pages, NULL);
-}
-EXPORT_SYMBOL(get_user_pages_locked);
-
-static long __get_user_pages_unlocked(struct task_struct *tsk,
-			struct mm_struct *mm, unsigned long start,
-			unsigned long nr_pages, struct page **pages,
-			unsigned int gup_flags)
-{
-	long ret;
-	down_read(&mm->mmap_sem);
-	ret = __get_user_pages(tsk, mm, start, nr_pages, gup_flags, pages,
-				NULL, NULL);
-	up_read(&mm->mmap_sem);
-	return ret;
-}
-
-long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
-			     struct page **pages, unsigned int gup_flags)
-{
-	return __get_user_pages_unlocked(current, current->mm, start, nr_pages,
-					 pages, gup_flags);
-}
-EXPORT_SYMBOL(get_user_pages_unlocked);
-
 /**
  * follow_pfn - look up PFN at a user virtual address
  * @vma: memory mapping
diff --git a/mm/util.c b/mm/util.c
index 9834c4ab7d8e..68575a315dc5 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -300,53 +300,6 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
 }
 #endif
 
-/*
- * Like get_user_pages_fast() except its IRQ-safe in that it won't fall
- * back to the regular GUP.
- * Note a difference with get_user_pages_fast: this always returns the
- * number of pages pinned, 0 if no pages were pinned.
- * If the architecture does not support this function, simply return with no
- * pages pinned.
- */
-int __weak __get_user_pages_fast(unsigned long start,
-				 int nr_pages, int write, struct page **pages)
-{
-	return 0;
-}
-EXPORT_SYMBOL_GPL(__get_user_pages_fast);
-
-/**
- * get_user_pages_fast() - pin user pages in memory
- * @start:	starting user address
- * @nr_pages:	number of pages from start to pin
- * @gup_flags:	flags modifying pin behaviour
- * @pages:	array that receives pointers to the pages pinned.
- *		Should be at least nr_pages long.
- *
- * get_user_pages_fast provides equivalent functionality to get_user_pages,
- * operating on current and current->mm, with force=0 and vma=NULL. However
- * unlike get_user_pages, it must be called without mmap_sem held.
- *
- * get_user_pages_fast may take mmap_sem and page table locks, so no
- * assumptions can be made about lack of locking. get_user_pages_fast is to be
- * implemented in a way that is advantageous (vs get_user_pages()) when the
- * user memory area is already faulted in and present in ptes. However if the
- * pages have to be faulted in, it may turn out to be slightly slower so
- * callers need to carefully consider what to use. On many architectures,
- * get_user_pages_fast simply falls back to get_user_pages.
- *
- * Return: number of pages pinned. This may be fewer than the number
- * requested. If nr_pages is 0 or negative, returns 0. If no pages
- * were pinned, returns -errno.
- */
-int __weak get_user_pages_fast(unsigned long start,
-				int nr_pages, unsigned int gup_flags,
-				struct page **pages)
-{
-	return get_user_pages_unlocked(start, nr_pages, pages, gup_flags);
-}
-EXPORT_SYMBOL_GPL(get_user_pages_fast);
-
 unsigned long vm_mmap_pgoff(struct file *file, unsigned long addr,
 	unsigned long len, unsigned long prot,
 	unsigned long flag, unsigned long pgoff)
From: Christoph Hellwig
To: Andrew Morton, Linus Torvalds, Paul Burton, James Hogan, Yoshinori Sato, Rich Felker, "David S. Miller"
Cc: Nicholas Piggin, Khalid Aziz, Andrey Konovalov, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, linux-mips@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 13/16] mm: validate get_user_pages_fast flags
Date: Tue, 25 Jun 2019 16:37:12 +0200
Message-Id: <20190625143715.1689-14-hch@lst.de>
In-Reply-To: <20190625143715.1689-1-hch@lst.de>
References: <20190625143715.1689-1-hch@lst.de>

We can only deal with FOLL_WRITE and/or FOLL_LONGTERM in
get_user_pages_fast, so reject all other flags.
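The whitelist pattern the patch adds can be sketched in plain user-space C. The `FOLL_*` values and the `validate_gup_flags` helper below are invented for illustration; the real constants live in include/linux/mm.h and the real check sits inline at the top of get_user_pages_fast():

```c
#include <errno.h>

/* Illustrative flag values only -- NOT the real kernel FOLL_* constants. */
#define FOLL_WRITE	0x01
#define FOLL_LONGTERM	0x10000

/*
 * Sketch of the check this patch adds: any flag outside the supported
 * set is rejected up front with -EINVAL instead of being silently
 * ignored by the fast path.
 */
static int validate_gup_flags(unsigned int gup_flags)
{
	if (gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM))
		return -EINVAL;
	return 0;
}
```

Rejecting unknown flags early turns a silent misbehaviour into an immediate, diagnosable error for callers that pass flags the fast path cannot honour.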
Signed-off-by: Christoph Hellwig
---
 mm/gup.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/gup.c b/mm/gup.c
index 0e83dba98dfd..37a2083b1ed8 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2317,6 +2317,9 @@ int get_user_pages_fast(unsigned long start, int nr_pages,
 	unsigned long addr, len, end;
 	int nr = 0, ret = 0;
 
+	if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM)))
+		return -EINVAL;
+
 	start = untagged_addr(start) & PAGE_MASK;
 	addr = start;
 	len = (unsigned long) nr_pages << PAGE_SHIFT;

From patchwork Tue Jun 25 14:37:13 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11015725
From: Christoph Hellwig
To: Andrew Morton, Linus Torvalds, Paul Burton, James Hogan, Yoshinori Sato, Rich Felker, "David S. Miller"
Cc: Nicholas Piggin, Khalid Aziz, Andrey Konovalov, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, linux-mips@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 14/16] mm: move the powerpc hugepd code to mm/gup.c
Date: Tue, 25 Jun 2019 16:37:13 +0200
Message-Id: <20190625143715.1689-15-hch@lst.de>
In-Reply-To: <20190625143715.1689-1-hch@lst.de>
References: <20190625143715.1689-1-hch@lst.de>

While only powerpc supports the hugepd case, the code is pretty generic
and I'd like to keep all GUP internals in one place.
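The range-clamping helper this patch moves into mm/gup.c is small enough to test in isolation. It is reproduced below as it appears in the patch; `sz` must be a power of two, and the `- 1` in both comparisons keeps the result correct even when `addr + sz` wraps past the top of the address space:

```c
/*
 * Clamp the walk at the end of the current huge page table entry:
 * either the next sz-aligned boundary after addr, or end, whichever
 * comes first.  Comparing "x - 1" values handles the wraparound case
 * where addr + sz overflows to 0.
 */
static unsigned long hugepte_addr_end(unsigned long addr, unsigned long end,
				      unsigned long sz)
{
	unsigned long __boundary = (addr + sz) & ~(sz - 1);

	return (__boundary - 1 < end - 1) ? __boundary : end;
}
```

For example, with a 4 KiB `sz`, an `addr` of 0x1200 and an `end` of 0x5000, the next boundary is 0x2000, so the walk stops there rather than at `end`.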
Signed-off-by: Christoph Hellwig
---
 arch/powerpc/Kconfig          |  1 +
 arch/powerpc/mm/hugetlbpage.c | 72 ------------------------------
 include/linux/hugetlb.h      | 18 --------
 mm/Kconfig                   | 10 +++++
 mm/gup.c                     | 82 +++++++++++++++++++++++++++++++++++
 5 files changed, 93 insertions(+), 90 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 992a04796e56..4f1b00979cde 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -125,6 +125,7 @@ config PPC
 	select ARCH_HAS_FORTIFY_SOURCE
 	select ARCH_HAS_GCOV_PROFILE_ALL
 	select ARCH_HAS_KCOV
+	select ARCH_HAS_HUGEPD			if HUGETLB_PAGE
 	select ARCH_HAS_MMIOWB			if PPC64
 	select ARCH_HAS_PHYS_TO_DMA
 	select ARCH_HAS_PMEM_API		if PPC64
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index b5d92dc32844..51716c11d0fb 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -511,13 +511,6 @@ struct page *follow_huge_pd(struct vm_area_struct *vma,
 	return page;
 }
 
-static unsigned long hugepte_addr_end(unsigned long addr, unsigned long end,
-				      unsigned long sz)
-{
-	unsigned long __boundary = (addr + sz) & ~(sz-1);
-	return (__boundary - 1 < end - 1) ? __boundary : end;
-}
-
 #ifdef CONFIG_PPC_MM_SLICES
 unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 					unsigned long len, unsigned long pgoff,
@@ -665,68 +658,3 @@ void flush_dcache_icache_hugepage(struct page *page)
 		}
 	}
 }
-
-static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
-		       unsigned long end, int write, struct page **pages, int *nr)
-{
-	unsigned long pte_end;
-	struct page *head, *page;
-	pte_t pte;
-	int refs;
-
-	pte_end = (addr + sz) & ~(sz-1);
-	if (pte_end < end)
-		end = pte_end;
-
-	pte = READ_ONCE(*ptep);
-
-	if (!pte_access_permitted(pte, write))
-		return 0;
-
-	/* hugepages are never "special" */
-	VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
-
-	refs = 0;
-	head = pte_page(pte);
-
-	page = head + ((addr & (sz-1)) >> PAGE_SHIFT);
-	do {
-		VM_BUG_ON(compound_head(page) != head);
-		pages[*nr] = page;
-		(*nr)++;
-		page++;
-		refs++;
-	} while (addr += PAGE_SIZE, addr != end);
-
-	if (!page_cache_add_speculative(head, refs)) {
-		*nr -= refs;
-		return 0;
-	}
-
-	if (unlikely(pte_val(pte) != pte_val(*ptep))) {
-		/* Could be optimized better */
-		*nr -= refs;
-		while (refs--)
-			put_page(head);
-		return 0;
-	}
-
-	return 1;
-}
-
-int gup_huge_pd(hugepd_t hugepd, unsigned long addr, unsigned int pdshift,
-		unsigned long end, int write, struct page **pages, int *nr)
-{
-	pte_t *ptep;
-	unsigned long sz = 1UL << hugepd_shift(hugepd);
-	unsigned long next;
-
-	ptep = hugepte_offset(hugepd, addr, pdshift);
-	do {
-		next = hugepte_addr_end(addr, end, sz);
-		if (!gup_hugepte(ptep, sz, addr, end, write, pages, nr))
-			return 0;
-	} while (ptep++, addr = next, addr != end);
-
-	return 1;
-}
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index edf476c8cfb9..0f91761e2c53 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -16,29 +16,11 @@ struct user_struct;
 struct mmu_gather;
 
 #ifndef is_hugepd
-/*
- * Some architectures requires a hugepage directory format that is
- * required to support multiple hugepage sizes. For example
- * a4fe3ce76 "powerpc/mm: Allow more flexible layouts for hugepage pagetables"
- * introduced the same on powerpc. This allows for a more flexible hugepage
- * pagetable layout.
- */
 typedef struct { unsigned long pd; } hugepd_t;
 #define is_hugepd(hugepd) (0)
 #define __hugepd(x) ((hugepd_t) { (x) })
-static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
-			      unsigned pdshift, unsigned long end,
-			      int write, struct page **pages, int *nr)
-{
-	return 0;
-}
-#else
-extern int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
-		       unsigned pdshift, unsigned long end,
-		       int write, struct page **pages, int *nr);
 #endif
-
 #ifdef CONFIG_HUGETLB_PAGE
 
 #include
diff --git a/mm/Kconfig b/mm/Kconfig
index 5c41409557da..44be3f01a2b2 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -769,4 +769,14 @@ config GUP_GET_PTE_LOW_HIGH
 config ARCH_HAS_PTE_SPECIAL
 	bool
 
+#
+# Some architectures require a special hugepage directory format that is
+# required to support multiple hugepage sizes. For example a4fe3ce76
+# "powerpc/mm: Allow more flexible layouts for hugepage pagetables"
+# introduced it on powerpc. This allows for a more flexible hugepage
+# pagetable layouts.
+#
+config ARCH_HAS_HUGEPD
+	bool
+
 endmenu
diff --git a/mm/gup.c b/mm/gup.c
index 37a2083b1ed8..7077549bc8ea 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1966,6 +1966,88 @@ static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
 }
 #endif
 
+#ifdef CONFIG_ARCH_HAS_HUGEPD
+static unsigned long hugepte_addr_end(unsigned long addr, unsigned long end,
+		unsigned long sz)
+{
+	unsigned long __boundary = (addr + sz) & ~(sz-1);
+	return (__boundary - 1 < end - 1) ? __boundary : end;
+}
+
+static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
+		unsigned long end, int write, struct page **pages, int *nr)
+{
+	unsigned long pte_end;
+	struct page *head, *page;
+	pte_t pte;
+	int refs;
+
+	pte_end = (addr + sz) & ~(sz-1);
+	if (pte_end < end)
+		end = pte_end;
+
+	pte = READ_ONCE(*ptep);
+
+	if (!pte_access_permitted(pte, write))
+		return 0;
+
+	/* hugepages are never "special" */
+	VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
+
+	refs = 0;
+	head = pte_page(pte);
+
+	page = head + ((addr & (sz-1)) >> PAGE_SHIFT);
+	do {
+		VM_BUG_ON(compound_head(page) != head);
+		pages[*nr] = page;
+		(*nr)++;
+		page++;
+		refs++;
+	} while (addr += PAGE_SIZE, addr != end);
+
+	if (!page_cache_add_speculative(head, refs)) {
+		*nr -= refs;
+		return 0;
+	}
+
+	if (unlikely(pte_val(pte) != pte_val(*ptep))) {
+		/* Could be optimized better */
+		*nr -= refs;
+		while (refs--)
+			put_page(head);
+		return 0;
+	}
+
+	return 1;
+}
+
+static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
+		unsigned int pdshift, unsigned long end, int write,
+		struct page **pages, int *nr)
+{
+	pte_t *ptep;
+	unsigned long sz = 1UL << hugepd_shift(hugepd);
+	unsigned long next;
+
+	ptep = hugepte_offset(hugepd, addr, pdshift);
+	do {
+		next = hugepte_addr_end(addr, end, sz);
+		if (!gup_hugepte(ptep, sz, addr, end, write, pages, nr))
+			return 0;
+	} while (ptep++, addr = next, addr != end);
+
+	return 1;
+}
+#else
+static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
+		unsigned pdshift, unsigned long end, int write,
+		struct page **pages, int *nr)
+{
+	return 0;
+}
+#endif /* CONFIG_ARCH_HAS_HUGEPD */
+
 static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 		unsigned long end, unsigned int flags, struct page **pages, int *nr)
 {

From patchwork Tue Jun 25 14:37:14 2019
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11015731
From: Christoph Hellwig
To: Andrew Morton, Linus Torvalds, Paul Burton, James Hogan, Yoshinori Sato, Rich Felker, "David S. Miller"
Cc: Nicholas Piggin, Khalid Aziz, Andrey Konovalov, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, linux-mips@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 15/16] mm: switch gup_hugepte to use try_get_compound_head
Date: Tue, 25 Jun 2019 16:37:14 +0200
Message-Id: <20190625143715.1689-16-hch@lst.de>
In-Reply-To: <20190625143715.1689-1-hch@lst.de>
References: <20190625143715.1689-1-hch@lst.de>

This applies the overflow fixes from 8fde12ca79aff ("mm: prevent
get_user_pages() from overflowing page refcount") to the powerpc hugepd
code and brings it back in sync with the other GUP cases.
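The idea behind try_get_compound_head() is that taking many references at once must fail cleanly rather than let the page refcount overflow. A toy user-space model of that check is sketched below; `struct fake_page` and `try_get_page_refs` are invented for illustration and deliberately ignore the atomicity the real kernel code needs:

```c
#include <limits.h>
#include <stddef.h>

/* Toy stand-in for struct page; illustration only, not atomic. */
struct fake_page {
	int refcount;
};

/*
 * Model of the overflow-safe acquisition: refuse (return NULL) if the
 * page is already free or if adding refs would overflow the counter,
 * otherwise take the references and return the page.
 */
static struct fake_page *try_get_page_refs(struct fake_page *head, int refs)
{
	if (head->refcount <= 0 || head->refcount > INT_MAX - refs)
		return NULL;
	head->refcount += refs;
	return head;
}
```

The caller then treats a NULL return exactly like the old `page_cache_add_speculative()` failure path: back out the pages already recorded and report no progress, instead of corrupting the refcount.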
Signed-off-by: Christoph Hellwig --- mm/gup.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/mm/gup.c b/mm/gup.c index 7077549bc8ea..e06447cff635 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -2006,7 +2006,8 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr, refs++; } while (addr += PAGE_SIZE, addr != end); - if (!page_cache_add_speculative(head, refs)) { + head = try_get_compound_head(head, refs); + if (!head) { *nr -= refs; return 0; } From patchwork Tue Jun 25 14:37:15 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 11015737 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 490611398 for ; Tue, 25 Jun 2019 14:38:46 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3D14F2863C for ; Tue, 25 Jun 2019 14:38:46 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 306A428657; Tue, 25 Jun 2019 14:38:46 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-2.7 required=2.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,MAILING_LIST_MULTI,RCVD_IN_DNSWL_NONE autolearn=unavailable version=3.3.1 Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DC14D2863C for ; Tue, 25 Jun 2019 14:38:45 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 98AD08E0012; Tue, 25 Jun 2019 10:38:28 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 8C3EF8E000B; Tue, 25 Jun 2019 10:38:28 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by 
From: Christoph Hellwig
To: Andrew Morton, Linus Torvalds, Paul Burton, James Hogan, Yoshinori Sato, Rich Felker, "David S. Miller"
Cc: Nicholas Piggin, Khalid Aziz, Andrey Konovalov, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, linux-mips@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 16/16] mm: mark the page referenced in gup_hugepte
Date: Tue, 25 Jun 2019 16:37:15 +0200
Message-Id: <20190625143715.1689-17-hch@lst.de>
In-Reply-To: <20190625143715.1689-1-hch@lst.de>
References: <20190625143715.1689-1-hch@lst.de>

All other get_user_page_fast cases mark the page referenced, so do
this here as well.

Signed-off-by: Christoph Hellwig
---
 mm/gup.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/gup.c b/mm/gup.c
index e06447cff635..d9d022d835ca 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2020,6 +2020,7 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 		return 0;
 	}

+	SetPageReferenced(head);
 	return 1;
 }