From patchwork Mon Jun 19 23:36:20 2017
X-Patchwork-Submitter: Kees Cook
X-Patchwork-Id: 9798067
Delivered-To: mailing list kernel-hardening@lists.openwall.com
From: Kees Cook
To: kernel-hardening@lists.openwall.com
Cc: Kees Cook, David Windsor, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Date: Mon, 19 Jun 2017 16:36:20 -0700
Message-Id: <1497915397-93805-7-git-send-email-keescook@chromium.org>
In-Reply-To: <1497915397-93805-1-git-send-email-keescook@chromium.org>
References: <1497915397-93805-1-git-send-email-keescook@chromium.org>
Subject: [kernel-hardening] [PATCH 06/23] cifs: define usercopy region in cifs_request slab cache

From: David Windsor

cifs request buffers, stored in the cifs_request slab cache, need to be copied
to/from userspace. In support of usercopy hardening, this patch defines a
region in the cifs_request slab cache in which userspace copy operations are
allowed. This region is known as the slab cache's usercopy region. Slab caches
can now check that each copy operation involving cache-managed memory falls
entirely within the slab's usercopy region.

This patch is verbatim from Brad Spengler/PaX Team's PAX_USERCOPY
whitelisting code in the last public patch of grsecurity/PaX based on my
understanding of the code. Changes or omissions from the original code are
mine and don't reflect the original grsecurity/PaX code.
Signed-off-by: David Windsor
[kees: adjust commit log]
Signed-off-by: Kees Cook
---
 fs/cifs/cifsfs.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
index 9a1667e0e8d6..385c5cc8903e 100644
--- a/fs/cifs/cifsfs.c
+++ b/fs/cifs/cifsfs.c
@@ -1234,9 +1234,11 @@ cifs_init_request_bufs(void)
 	cifs_dbg(VFS, "CIFSMaxBufSize %d 0x%x\n",
 		 CIFSMaxBufSize, CIFSMaxBufSize);
 */
-	cifs_req_cachep = kmem_cache_create("cifs_request",
+	cifs_req_cachep = kmem_cache_create_usercopy("cifs_request",
 					    CIFSMaxBufSize + max_hdr_size, 0,
-					    SLAB_HWCACHE_ALIGN, NULL);
+					    SLAB_HWCACHE_ALIGN, 0,
+					    CIFSMaxBufSize + max_hdr_size,
+					    NULL);
 	if (cifs_req_cachep == NULL)
 		return -ENOMEM;
@@ -1262,9 +1264,9 @@ cifs_init_request_bufs(void)
 	more SMBs to use small buffer alloc and is still much more
 	efficient to alloc 1 per page off the slab compared to 17K (5page)
 	alloc of large cifs buffers even when page debugging is on */
-	cifs_sm_req_cachep = kmem_cache_create("cifs_small_rq",
+	cifs_sm_req_cachep = kmem_cache_create_usercopy("cifs_small_rq",
 					      MAX_CIFS_SMALL_BUFFER_SIZE, 0,
 					      SLAB_HWCACHE_ALIGN,
-					      NULL);
+					      0, MAX_CIFS_SMALL_BUFFER_SIZE, NULL);
 	if (cifs_sm_req_cachep == NULL) {
 		mempool_destroy(cifs_req_poolp);
 		kmem_cache_destroy(cifs_req_cachep);