From patchwork Mon Apr 13 22:49:03 2015
X-Patchwork-Submitter: Michael Davidson
X-Patchwork-Id: 6212281
From: Michael Davidson
To: Alexander Viro, Jiri Kosina, Andrew Morton
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	Michael Davidson
Subject: [PATCH] binfmt_elf: Fix bug in loading of PIE binaries.
Date: Mon, 13 Apr 2015 15:49:03 -0700
Message-Id: <1428965343-17762-1-git-send-email-md@google.com>
X-Mailer: git-send-email 2.2.0.rc0.207.ga3a616c

With CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE enabled, and a normal
top-down address allocation strategy, load_elf_binary() will attempt
to map a PIE binary into an address range immediately below
mm->mmap_base.

Unfortunately, load_elf_binary() does not take account of the need to
allocate sufficient space for the entire binary, which means that,
while the first PT_LOAD segment is mapped below mm->mmap_base, the
subsequent PT_LOAD segment(s) end up being mapped above mm->mmap_base
into the area that is supposed to be the "gap" between the stack and
the binary.

Since the size of the "gap" on x86_64 is only guaranteed to be 128MB,
binaries with data segments larger than 128MB can end up mapping part
of their data segment over their stack, resulting in corruption of the
stack (and of the data segment once the binary starts to run).

Any PIE binary with a data segment > 128MB is vulnerable to this,
although address randomization means that the actual gap between the
stack and the end of the binary is normally greater than 128MB. The
larger the data segment of the binary, the higher the probability of
failure.

Fix this by calculating the total size of the binary in the same way
as load_elf_interp().

Signed-off-by: Michael Davidson
---
 fs/binfmt_elf.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index 995986b..d925f55 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -862,6 +862,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
 	     i < loc->elf_ex.e_phnum; i++, elf_ppnt++) {
 		int elf_prot = 0, elf_flags;
 		unsigned long k, vaddr;
+		unsigned long total_size = 0;
 
 		if (elf_ppnt->p_type != PT_LOAD)
 			continue;
@@ -924,10 +925,16 @@ static int load_elf_binary(struct linux_binprm *bprm)
 #else
 			load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
 #endif
+			total_size = total_mapping_size(elf_phdata,
+							loc->elf_ex.e_phnum);
+			if (!total_size) {
+				error = -EINVAL;
+				goto out_free_dentry;
+			}
 		}
 
 		error = elf_map(bprm->file, load_bias + vaddr, elf_ppnt,
-				elf_prot, elf_flags, 0);
+				elf_prot, elf_flags, total_size);
 		if (BAD_ADDR(error)) {
 			retval = IS_ERR((void *)error) ?
 				PTR_ERR((void*)error) : -EINVAL;
+			goto out_free_dentry;
+		}
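
A note for readers following the fix: total_mapping_size() is not
introduced by this patch; it is the existing helper in fs/binfmt_elf.c
that load_elf_interp() already uses. The sketch below reflects my
reading of the then-current source and is illustrative rather than
authoritative: it returns the distance from the page-aligned start of
the first PT_LOAD segment to the end of the last one, or 0 when there
are no PT_LOAD segments at all, which is why the patch treats a zero
return as -EINVAL.

static unsigned long total_mapping_size(struct elf_phdr *cmds, int nr)
{
	int i, first_idx = -1, last_idx = -1;

	/* Find the first and last PT_LOAD program headers. */
	for (i = 0; i < nr; i++) {
		if (cmds[i].p_type == PT_LOAD) {
			last_idx = i;
			if (first_idx == -1)
				first_idx = i;
		}
	}

	/* No PT_LOAD segments: nothing to map, caller treats as error. */
	if (first_idx == -1)
		return 0;

	/*
	 * Span from the page-aligned start of the first loadable
	 * segment to the end (p_vaddr + p_memsz) of the last one.
	 */
	return cmds[last_idx].p_vaddr + cmds[last_idx].p_memsz -
				ELF_PAGESTART(cmds[first_idx].p_vaddr);
}

Passing a nonzero total_size into elf_map() makes the very first
mapping reserve the entire span below mm->mmap_base; the remaining
PT_LOAD segments are then placed with MAP_FIXED inside that
reservation rather than being allocated above mm->mmap_base in the
stack gap, which is what caused the corruption described above.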