From patchwork Tue Jan 15 18:56:40 2019
X-Patchwork-Submitter: Ruslan Nikolaev
X-Patchwork-Id: 10764997
To: kernel-hardening@lists.openwall.com
Cc: thgarnie@google.com, x86@kernel.org, kstewart@linuxfoundation.org,
    gregkh@linuxfoundation.org, keescook@chromium.org
From: Ruslan Nikolaev
Subject: [PATCH v1 01/06]: Extending objtool for PIC modules
Message-ID:
Date: Tue, 15 Jan 2019 13:56:40 -0500

Extending objtool for PIC modules

The patch is by
Hassan Nadeem and Ruslan Nikolaev. This extends the prior PIE kernel
patch (by Thomas Garnier) to also support position-independent modules
that can be placed anywhere in the 48/64-bit address space (for better
KASLR).

Signed-off-by: Ruslan Nikolaev
---
 check.c |   39 ++++++++++++++++++++++++++++-----------
 1 file changed, 28 insertions(+), 11 deletions(-)

@@ -581,7 +583,7 @@ static int add_call_destinations(struct
 	struct rela *rela;
 
 	for_each_insn(file, insn) {
-		if (insn->type != INSN_CALL)
+		if (insn->type != INSN_CALL && insn->type != INSN_CALL_DYNAMIC)
 			continue;
 
 		rela = find_rela_by_dest_range(insn->sec, insn->offset,
@@ -590,8 +592,8 @@ static int add_call_destinations(struct
 			dest_off = insn->offset + insn->len + insn->immediate;
 			insn->call_dest = find_symbol_by_offset(insn->sec,
 								dest_off);
-
-			if (!insn->call_dest && !insn->ignore) {
+			if (!insn->call_dest && !insn->ignore &&
+			    insn->type != INSN_CALL_DYNAMIC) {
 				WARN_FUNC("unsupported intra-function call",
 					  insn->sec, insn->offset);
 				if (retpoline)
@@ -602,8 +604,9 @@ static int add_call_destinations(struct
 		} else if (rela->sym->type == STT_SECTION) {
 			insn->call_dest = find_symbol_by_offset(rela->sym->sec,
 								rela->addend+4);
-			if (!insn->call_dest ||
-			    insn->call_dest->type != STT_FUNC) {
+			if ((!insn->call_dest ||
+			     insn->call_dest->type != STT_FUNC) &&
+			    insn->type != INSN_CALL_DYNAMIC) {
 				WARN_FUNC("can't find call dest symbol at %s+0x%x",
 					  insn->sec, insn->offset,
 					  rela->sym->sec->name,
@@ -836,6 +839,11 @@ static int add_switch_table(struct objto
 	struct symbol *pfunc = insn->func->pfunc;
 	unsigned int prev_offset = 0;
 
+	/* If PC32 relocations are used (as in PIC), the following logic
+	 * can be broken in many ways.
+	 */
+	if (file->ignore_unreachables)
+		return 0;
 	list_for_each_entry_from(rela, &file->rodata->rela->rela_list, list) {
 		if (rela == next_table)
 			break;
@@ -1244,7 +1252,7 @@ static int decode_sections(struct objtoo
 static bool is_fentry_call(struct instruction *insn)
 {
-	if (insn->type == INSN_CALL &&
+	if (insn->call_dest &&
 	    insn->call_dest->type == STT_NOTYPE &&
 	    !strcmp(insn->call_dest->name, "__fentry__"))
 		return true;
@@ -1889,6 +1897,7 @@ static int validate_branch(struct objtoo
 			return 0;
 
 		case INSN_CALL:
+		case INSN_CALL_DYNAMIC:
 			if (is_fentry_call(insn))
 				break;
@@ -1898,8 +1907,6 @@ static int validate_branch(struct objtoo
 			if (ret == -1)
 				return 1;
 
-			/* fallthrough */
-		case INSN_CALL_DYNAMIC:
 			if (!no_fp && func && !has_valid_stack_frame(&state)) {
 				WARN_FUNC("call without frame pointer save/setup",
 					  sec, insn->offset);
@@ -1929,12 +1936,15 @@ static int validate_branch(struct objtoo
 			break;
 
 		case INSN_JUMP_DYNAMIC:
+			/* XXX: Does not work properly with PIC code. */
+#if 0
 			if (func && list_empty(&insn->alts) &&
 			    has_modified_stack_frame(&state)) {
 				WARN_FUNC("sibling call from callable instruction with modified stack frame",
 					  sec, insn->offset);
 				return 1;
 			}
+#endif
 
 			return 0;
@@ -2015,6 +2025,11 @@ static int validate_retpoline(struct obj
 		if (!strcmp(insn->sec->name, ".init.text") && !module)
 			continue;
 
+		/* ignore ftrace calls in PIC code */
+		if (!insn->call_dest ||
+		    !strcmp(insn->call_dest->name, "__fentry__"))
+			continue;
+
 		WARN_FUNC("indirect %s found in RETPOLINE build",
 			  insn->sec, insn->offset,
 			  insn->type == INSN_JUMP_DYNAMIC ?
 			  "jump" : "call");
@@ -2027,13 +2042,15 @@ static int validate_retpoline(struct obj
 static bool is_kasan_insn(struct instruction *insn)
 {
-	return (insn->type == INSN_CALL &&
+	return ((insn->type == INSN_CALL || insn->type == INSN_CALL_DYNAMIC) &&
+		insn->call_dest &&
 		!strcmp(insn->call_dest->name, "__asan_handle_no_return"));
 }
 
 static bool is_ubsan_insn(struct instruction *insn)
 {
-	return (insn->type == INSN_CALL &&
+	return ((insn->type == INSN_CALL || insn->type == INSN_CALL_DYNAMIC) &&
+		insn->call_dest &&
 		!strcmp(insn->call_dest->name, "__ubsan_handle_builtin_unreachable"));
 }
diff -uprN a/tools/objtool/check.c b/tools/objtool/check.c
--- a/tools/objtool/check.c	2019-01-15 11:20:46.047176216 -0500
+++ b/tools/objtool/check.c	2019-01-15 11:20:57.727294197 -0500
@@ -179,7 +179,7 @@ static int __dead_end_function(struct ob
 		return 0;
 
 	insn = find_insn(file, func->sec, func->offset);
-	if (!insn->func)
+	if (!insn || !insn->func)
 		return 0;
 
 	func_for_each_insn_all(file, func, insn) {
@@ -233,6 +233,8 @@ static int __dead_end_function(struct ob
 static int dead_end_function(struct objtool_file *file, struct symbol *func)
 {
+	if (!func)
+		return 0;
 	return __dead_end_function(file, func, 0);
 }

From patchwork Tue Jan 15 18:58:40 2019
X-Patchwork-Submitter: Ruslan Nikolaev
X-Patchwork-Id: 10764999
To: kernel-hardening@lists.openwall.com
Cc: thgarnie@google.com, x86@kernel.org, kstewart@linuxfoundation.org,
    gregkh@linuxfoundation.org, keescook@chromium.org
From: Ruslan Nikolaev
Subject: [PATCH v1 02/06]: Avoid using the same object file in the kernel and modules
Message-ID: <3633bff2-d07d-e135-a583-6fada6bf7671@yahoo.com>
Date: Tue, 15 Jan 2019 13:58:40 -0500

Avoid using the same object file in the kernel and modules

The patch is by Hassan Nadeem and Ruslan Nikolaev. This extends the prior
PIE kernel patch (by Thomas Garnier) to also support position-independent
modules that can be placed anywhere in the 48/64-bit address space (for
better KASLR).
Signed-off-by: Ruslan Nikolaev
---
 Makefile             |    2 +-
 entropy_common_dec.c |    2 ++
 fse_decompress_dec.c |    2 ++
 zstd_common_dec.c    |    2 ++
 4 files changed, 7 insertions(+), 1 deletion(-)

diff -uprN a/lib/zstd/entropy_common_dec.c b/lib/zstd/entropy_common_dec.c
--- a/lib/zstd/entropy_common_dec.c	1969-12-31 19:00:00.000000000 -0500
+++ b/lib/zstd/entropy_common_dec.c	2019-01-15 11:22:25.688186400 -0500
@@ -0,0 +1,2 @@
+// SPDX-License-Identifier: BSD-2-Clause OR GPL-2.0
+#include "entropy_common.c"
diff -uprN a/lib/zstd/fse_decompress_dec.c b/lib/zstd/fse_decompress_dec.c
--- a/lib/zstd/fse_decompress_dec.c	1969-12-31 19:00:00.000000000 -0500
+++ b/lib/zstd/fse_decompress_dec.c	2019-01-15 11:22:25.688186400 -0500
@@ -0,0 +1,2 @@
+// SPDX-License-Identifier: BSD-2-Clause OR GPL-2.0
+#include "fse_decompress.c"
diff -uprN a/lib/zstd/Makefile b/lib/zstd/Makefile
--- a/lib/zstd/Makefile	2019-01-15 11:20:44.987165514 -0500
+++ b/lib/zstd/Makefile	2019-01-15 11:22:25.688186400 -0500
@@ -6,4 +6,4 @@ ccflags-y += -O3
 zstd_compress-y := fse_compress.o huf_compress.o compress.o \
 		   entropy_common.o fse_decompress.o zstd_common.o
 zstd_decompress-y := huf_decompress.o decompress.o \
-		     entropy_common.o fse_decompress.o zstd_common.o
+		     entropy_common_dec.o fse_decompress_dec.o zstd_common_dec.o
diff -uprN a/lib/zstd/zstd_common_dec.c b/lib/zstd/zstd_common_dec.c
--- a/lib/zstd/zstd_common_dec.c	1969-12-31 19:00:00.000000000 -0500
+++ b/lib/zstd/zstd_common_dec.c	2019-01-15 11:22:25.688186400 -0500
@@ -0,0 +1,2 @@
+// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
+#include "zstd_common.c"

From patchwork Tue Jan 15 18:59:52 2019
X-Patchwork-Submitter: Ruslan Nikolaev
X-Patchwork-Id: 10765001
To: kernel-hardening@lists.openwall.com
Cc: thgarnie@google.com, x86@kernel.org, kstewart@linuxfoundation.org,
    gregkh@linuxfoundation.org, keescook@chromium.org
From: Ruslan Nikolaev
Subject: [PATCH v1 03/06]: Export individual Xen hypercalls
Message-ID: <2d406c4a-8a37-fe1a-acd5-0e2758e234b2@yahoo.com>
Date: Tue, 15 Jan 2019 13:59:52 -0500

Export individual Xen hypercalls

The patch is by Hassan Nadeem and Ruslan Nikolaev. This extends the prior
PIE kernel patch (by Thomas Garnier) to also support position-independent
modules that can be placed anywhere in the 48/64-bit address space (for
better KASLR).
Signed-off-by: Ruslan Nikolaev
---
 xen-head.S |    2 ++
 1 file changed, 2 insertions(+)

diff -uprN a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
--- a/arch/x86/xen/xen-head.S	2019-01-15 11:20:45.279168462 -0500
+++ b/arch/x86/xen/xen-head.S	2019-01-15 11:28:54.676189964 -0500
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -66,6 +67,7 @@ ENTRY(hypercall_page)
 	.endr

 #define HYPERCALL(n) \
+	EXPORT_SYMBOL_GPL(xen_hypercall_##n); \
 	.equ xen_hypercall_##n, hypercall_page + __HYPERVISOR_##n * 32; \
 	.type xen_hypercall_##n, @function; .size xen_hypercall_##n, 32
 #include

From patchwork Tue Jan 15 19:01:06 2019
X-Patchwork-Submitter: Ruslan Nikolaev
X-Patchwork-Id: 10765003
To: kernel-hardening@lists.openwall.com
Cc: thgarnie@google.com,
    x86@kernel.org, kstewart@linuxfoundation.org, gregkh@linuxfoundation.org,
    keescook@chromium.org
From: Ruslan Nikolaev
Subject: [PATCH v1 04/06]: The PLT stub for PIC modules
Message-ID: <6a7e9f8a-75e1-c9a2-94c2-471e8d0ce85c@yahoo.com>
Date: Tue, 15 Jan 2019 14:01:06 -0500

The PLT stub for PIC modules

The patch is by Hassan Nadeem and Ruslan Nikolaev. This extends the prior
PIE kernel patch (by Thomas Garnier) to also support position-independent
modules that can be placed anywhere in the 48/64-bit address space (for
better KASLR).

Signed-off-by: Ruslan Nikolaev
---
 Makefile          |    3 ++-
 module-plt-stub.S |   23 +++++++++++++++++++++++
 2 files changed, 25 insertions(+), 1 deletion(-)

diff -uprN a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
--- a/arch/x86/kernel/Makefile	2019-01-15 11:20:45.271168382 -0500
+++ b/arch/x86/kernel/Makefile	2019-01-15 11:30:12.576999665 -0500
@@ -104,7 +104,8 @@ obj-$(CONFIG_KEXEC_CORE) += relocate_ker
 obj-$(CONFIG_KEXEC_FILE)	+= kexec-bzimage64.o
 obj-$(CONFIG_CRASH_DUMP)	+= crash_dump_$(BITS).o
 obj-y				+= kprobes/
-obj-$(CONFIG_MODULES)		+= module.o
+obj-$(CONFIG_MODULES)		+= module.o module-plt-stub.o
+OBJECT_FILES_NON_STANDARD_module-plt-stub.o := y
 obj-$(CONFIG_DOUBLEFAULT)	+= doublefault.o
 obj-$(CONFIG_KGDB)		+= kgdb.o
 obj-$(CONFIG_VM86)		+= vm86_32.o
diff -uprN a/arch/x86/kernel/module-plt-stub.S b/arch/x86/kernel/module-plt-stub.S
--- a/arch/x86/kernel/module-plt-stub.S	1969-12-31 19:00:00.000000000 -0500
+++ b/arch/x86/kernel/module-plt-stub.S	2019-01-15 11:30:12.580999706 -0500
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+/* The following code is used for PLT generation only
+ * and should never be executed directly.
+ */
+.section .rodata
+.globl __THUNK_FOR_PLT
+.globl __THUNK_FOR_PLT_SIZE
+__THUNK_FOR_PLT:
+#ifdef CONFIG_RETPOLINE
+	movq 0(%rip), %rax
+	JMP_NOSPEC %rax
+#else
+	jmpq *0(%rip)
+#endif
+__THUNK_FOR_PLT_SIZE: .long . - __THUNK_FOR_PLT

From patchwork Tue Jan 15 19:02:13 2019
X-Patchwork-Submitter: Ruslan Nikolaev
X-Patchwork-Id: 10765005
To: kernel-hardening@lists.openwall.com
Cc: thgarnie@google.com, x86@kernel.org, kstewart@linuxfoundation.org,
    gregkh@linuxfoundation.org, keescook@chromium.org
From: Ruslan Nikolaev
Subject: [PATCH v1 05/06]: Retpoline thunks for PIC modules
Message-ID: <851687ba-39a8-2b97-1b7f-51ab87f4b105@yahoo.com>
Date: Tue, 15 Jan 2019 14:02:13 -0500

Retpoline thunks for PIC modules

The patch is by Hassan Nadeem and Ruslan Nikolaev. This extends the prior
PIE kernel patch (by Thomas Garnier) to also support position-independent
modules that can be placed anywhere in the 48/64-bit address space (for
better KASLR).
Signed-off-by: Ruslan Nikolaev
---
 Makefile    |    3 +++
 retpoline.S |   47 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 50 insertions(+)

diff -uprN a/arch/x86/module-lib/Makefile b/arch/x86/module-lib/Makefile
--- a/arch/x86/module-lib/Makefile	1969-12-31 19:00:00.000000000 -0500
+++ b/arch/x86/module-lib/Makefile	2019-01-15 11:32:46.721911879 -0500
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: GPL-2.0
+
+obj-$(CONFIG_RETPOLINE) += retpoline.o
\ No newline at end of file
diff -uprN a/arch/x86/module-lib/retpoline.S b/arch/x86/module-lib/retpoline.S
--- a/arch/x86/module-lib/retpoline.S	1969-12-31 19:00:00.000000000 -0500
+++ b/arch/x86/module-lib/retpoline.S	2019-01-15 11:32:46.721911879 -0500
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+.macro THUNK reg
+	.section .text.__x86.indirect_thunk
+
+ENTRY(__x86_indirect_thunk_\reg)
+	CFI_STARTPROC
+	JMP_NOSPEC %\reg
+	CFI_ENDPROC
+ENDPROC(__x86_indirect_thunk_\reg)
+.endm
+
+/*
+ * Despite being an assembler file we can't just use .irp here
+ * because __KSYM_DEPS__ only uses the C preprocessor and would
+ * only see one instance of "__x86_indirect_thunk_\reg" rather
+ * than one per register with the correct names. So we do it
+ * the simple and nasty way...
+ */
+#define GENERATE_THUNK(reg) THUNK reg
+
+GENERATE_THUNK(_ASM_AX)
+GENERATE_THUNK(_ASM_BX)
+GENERATE_THUNK(_ASM_CX)
+GENERATE_THUNK(_ASM_DX)
+GENERATE_THUNK(_ASM_SI)
+GENERATE_THUNK(_ASM_DI)
+GENERATE_THUNK(_ASM_BP)
+#ifdef CONFIG_64BIT
+GENERATE_THUNK(r8)
+GENERATE_THUNK(r9)
+GENERATE_THUNK(r10)
+GENERATE_THUNK(r11)
+GENERATE_THUNK(r12)
+GENERATE_THUNK(r13)
+GENERATE_THUNK(r14)
+GENERATE_THUNK(r15)
+#endif
+

From patchwork Tue Jan 15 19:03:13 2019
X-Patchwork-Submitter: Ruslan Nikolaev
X-Patchwork-Id: 10765007
To: kernel-hardening@lists.openwall.com
Cc: thgarnie@google.com, x86@kernel.org, kstewart@linuxfoundation.org,
    gregkh@linuxfoundation.org, keescook@chromium.org
From: Ruslan Nikolaev
Subject: [PATCH v1 06/06]: Extending kernel support for PIC modules
Message-ID:
Date: Tue, 15 Jan 2019 14:03:13 -0500

Extending kernel support for PIC modules

The patch is by Hassan Nadeem and Ruslan Nikolaev. This extends the prior
PIE kernel patch (by Thomas Garnier) to also support position-independent
modules that can be placed anywhere in the 48/64-bit address space (for
better KASLR).

Signed-off-by: Ruslan Nikolaev
---
 Makefile                                              |    4
 arch/x86/Kconfig                                      |   12
 arch/x86/Makefile                                     |   11
 arch/x86/crypto/aes-x86_64-asm_64.S                   |    5
 arch/x86/crypto/cast5-avx-x86_64-asm_64.S             |    9
 arch/x86/crypto/cast6-avx-x86_64-asm_64.S             |    9
 arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S      |    3
 arch/x86/crypto/sha1-mb/sha1_mb_mgr_submit_avx2.S     |    3
 arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S  |    3
 arch/x86/crypto/sha256-mb/sha256_mb_mgr_submit_avx2.S |    3
 arch/x86/crypto/sha512-mb/sha512_mb_mgr_flush_avx2.S  |    3
 arch/x86/crypto/sha512-mb/sha512_mb_mgr_submit_avx2.S |    3
 arch/x86/include/asm/alternative.h                    |   12
 arch/x86/include/asm/arch_hweight.h                   |    5
 arch/x86/include/asm/asm.h                            |   60 +++-
 arch/x86/include/asm/elf.h                            |    5
 arch/x86/include/asm/jump_label.h                     |    4
 arch/x86/include/asm/kvm_host.h                       |   15 -
 arch/x86/include/asm/module.h                         |   26 +
 arch/x86/include/asm/paravirt_types.h                 |    9
 arch/x86/include/asm/percpu.h                         |    2
 arch/x86/include/asm/uaccess.h                        |    6
 arch/x86/include/asm/xen/hypercall.h                  |   31 +-
 arch/x86/kernel/ftrace.c                              |   14
 arch/x86/kernel/module.c                              |  263 ++++++++++++++++--
 arch/x86/kernel/module.lds                            |    1
 arch/x86/kvm/emulate.c                                |    1
 arch/x86/tools/relocs.c                               |    4
 scripts/Makefile.modpost                              |    2
 scripts/recordmcount.c                                |    3
 30 files changed, 447 insertions(+), 84 deletions(-)

diff -uprN a/arch/x86/crypto/aes-x86_64-asm_64.S b/arch/x86/crypto/aes-x86_64-asm_64.S
--- a/arch/x86/crypto/aes-x86_64-asm_64.S	2019-01-15 11:20:45.259168260 -0500
+++ b/arch/x86/crypto/aes-x86_64-asm_64.S	2019-01-15 11:34:00.001848665 -0500
@@ -17,6 +17,7 @@
 #include
 #include
+#include

 #define R1	%rax
 #define R1E	%eax
@@ -83,11 +84,11 @@ ENDPROC(FUNC);

 #define round_mov(tab_off, reg_i, reg_o) \
-	leaq	tab_off(%rip), RBASE; \
+	_ASM_LEA_RIP(tab_off, RBASE); \
 	movl	(RBASE,reg_i,4), reg_o;

 #define round_xor(tab_off, reg_i, reg_o) \
-	leaq	tab_off(%rip), RBASE; \
+	_ASM_LEA_RIP(tab_off, RBASE); \
 	xorl	(RBASE,reg_i,4), reg_o;

 #define round(TAB,OFFSET,r1,r2,r3,r4,r5,r6,r7,r8,ra,rb,rc,rd) \
diff -uprN a/arch/x86/crypto/cast5-avx-x86_64-asm_64.S b/arch/x86/crypto/cast5-avx-x86_64-asm_64.S
--- a/arch/x86/crypto/cast5-avx-x86_64-asm_64.S	2019-01-15 11:20:45.259168260 -0500
+++ b/arch/x86/crypto/cast5-avx-x86_64-asm_64.S	2019-01-15 11:34:00.001848665 -0500
@@ -25,6 +25,7 @@

 #include
 #include
+#include

 .file "cast5-avx-x86_64-asm_64.S"

@@ -99,17 +100,17 @@

 #define lookup_32bit(src, dst, op1, op2, op3, interleave_op, il_reg) \
 	movzbl	src ## bh, RID1d; \
-	leaq	s1(%rip), RID2; \
+	_ASM_LEA_RIP(s1, RID2); \
 	movl	(RID2, RID1, 4), dst ## d; \
 	movzbl	src ## bl, RID2d; \
-	leaq	s2(%rip), RID1; \
+	_ASM_LEA_RIP(s2, RID1); \
 	op1	(RID1, RID2, 4), dst ## d; \
 	shrq	$16, src; \
 	movzbl	src ## bh, RID1d; \
-	leaq	s3(%rip), RID2; \
+	_ASM_LEA_RIP(s3, RID2); \
 	op2	(RID2, RID1, 4), dst ## d; \
 	movzbl	src ## bl, RID2d; \
-	leaq	s4(%rip), RID1; \
+	_ASM_LEA_RIP(s4, RID1); \
 	op3	(RID1, RID2, 4), dst ## d; \
 	interleave_op(il_reg);
diff -uprN a/arch/x86/crypto/cast6-avx-x86_64-asm_64.S b/arch/x86/crypto/cast6-avx-x86_64-asm_64.S
--- a/arch/x86/crypto/cast6-avx-x86_64-asm_64.S	2019-01-15 11:20:45.259168260 -0500
+++ b/arch/x86/crypto/cast6-avx-x86_64-asm_64.S	2019-01-15 11:34:00.001848665 -0500
@@ -25,6 +25,7 @@

 #include
 #include
+#include

 #include "glue_helper-asm-avx.S"

 .file "cast6-avx-x86_64-asm_64.S"

@@ -99,17 +100,17 @@

 #define lookup_32bit(src, dst, op1, op2, op3, interleave_op, il_reg) \
 	movzbl	src ## bh, RID1d; \
-	leaq	s1(%rip), RID2; \
+	_ASM_LEA_RIP(s1, RID2); \
 	movl	(RID2, RID1, 4), dst ## d; \
 	movzbl	src ## bl, RID2d; \
-	leaq	s2(%rip), RID1; \
+	_ASM_LEA_RIP(s2, RID1); \
 	op1	(RID1, RID2, 4), dst ## d; \
 	shrq	$16, src; \
 	movzbl	src ## bh, RID1d; \
-	leaq	s3(%rip), RID2; \
+	_ASM_LEA_RIP(s3, RID2); \
 	op2	(RID2, RID1, 4), dst ## d; \
 	movzbl	src ## bl, RID2d; \
-	leaq	s4(%rip), RID1; \
+	_ASM_LEA_RIP(s4, RID1); \
 	op3	(RID1, RID2, 4), dst ## d; \
 	interleave_op(il_reg);
diff -uprN a/arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S b/arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S
--- a/arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S	2019-01-15 11:20:45.259168260 -0500
+++ b/arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S	2019-01-15 11:34:00.001848665 -0500
@@ -53,6 +53,7 @@
  */
 #include
 #include
+#include

 #include "sha1_mb_mgr_datastruct.S"

@@ -183,7 +184,7 @@ LABEL skip_ %I
 	# "state" and "args" are the same address, arg1
 	# len is arg2
-	call	sha1_x8_avx2
+	_ASM_CALL(sha1_x8_avx2)

 	# state and idx are intact
diff -uprN a/arch/x86/crypto/sha1-mb/sha1_mb_mgr_submit_avx2.S b/arch/x86/crypto/sha1-mb/sha1_mb_mgr_submit_avx2.S
--- a/arch/x86/crypto/sha1-mb/sha1_mb_mgr_submit_avx2.S	2019-01-15 11:20:45.259168260 -0500
+++ b/arch/x86/crypto/sha1-mb/sha1_mb_mgr_submit_avx2.S	2019-01-15 11:34:00.001848665 -0500
@@ -54,6 +54,7 @@

 #include
 #include
+#include

 #include "sha1_mb_mgr_datastruct.S"

@@ -163,7 +164,7 @@ start_loop:
 	# "state" and "args" are the same address, arg1
 	# len is arg2
-	call	sha1_x8_avx2
+	_ASM_CALL(sha1_x8_avx2)

 	# state and idx are intact
diff -uprN a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
--- a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S	2019-01-15 11:20:45.259168260 -0500
+++ b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S	2019-01-15 11:34:00.001848665 -0500
@@ -52,6 +52,7 @@
  */
 #include
 #include
+#include

 #include "sha256_mb_mgr_datastruct.S"

 .extern sha256_x8_avx2

@@ -181,7 +182,7 @@ LABEL skip_ %I
 	# "state" and "args" are the same address, arg1
 	# len is arg2
-	call	sha256_x8_avx2
+	_ASM_CALL(sha256_x8_avx2)

 	# state and idx are intact

 len_is_0:
diff -uprN a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_submit_avx2.S b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_submit_avx2.S
--- a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_submit_avx2.S	2019-01-15 11:20:45.259168260 -0500
+++ b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_submit_avx2.S	2019-01-15 11:34:00.001848665 -0500
@@ -53,6 +53,7 @@

 #include
 #include
+#include

 #include "sha256_mb_mgr_datastruct.S"

 .extern sha256_x8_avx2

@@ -164,7 +165,7 @@ start_loop:
 	# "state" and "args" are the same address, arg1
 	# len is arg2
-	call	sha256_x8_avx2
+	_ASM_CALL(sha256_x8_avx2)

 	# state and idx are intact
diff -uprN a/arch/x86/crypto/sha512-mb/sha512_mb_mgr_flush_avx2.S b/arch/x86/crypto/sha512-mb/sha512_mb_mgr_flush_avx2.S
--- a/arch/x86/crypto/sha512-mb/sha512_mb_mgr_flush_avx2.S	2019-01-15 11:20:45.259168260 -0500
+++ b/arch/x86/crypto/sha512-mb/sha512_mb_mgr_flush_avx2.S	2019-01-15 11:34:00.005850530 -0500
@@ -53,6 +53,7 @@

 #include
 #include
+#include

 #include "sha512_mb_mgr_datastruct.S"

 .extern sha512_x4_avx2

@@ -177,7 +178,7 @@ LABEL skip_ %I
 	# "state" and "args" are the same address, arg1
 	# len is arg2
-	call	sha512_x4_avx2
+	_ASM_CALL(sha512_x4_avx2)

 	# state and idx are intact

 len_is_0:
diff -uprN a/arch/x86/crypto/sha512-mb/sha512_mb_mgr_submit_avx2.S b/arch/x86/crypto/sha512-mb/sha512_mb_mgr_submit_avx2.S
--- a/arch/x86/crypto/sha512-mb/sha512_mb_mgr_submit_avx2.S	2019-01-15 11:20:45.259168260 -0500
+++ b/arch/x86/crypto/sha512-mb/sha512_mb_mgr_submit_avx2.S	2019-01-15 11:34:00.005850530 -0500
@@ -53,6 +53,7 @@

 #include
 #include
+#include

 #include "sha512_mb_mgr_datastruct.S"

 .extern sha512_x4_avx2

@@ -167,7 +168,7 @@ start_loop:
 	# "state" and "args" are the same address, arg1
 	# len is arg2
-	call	sha512_x4_avx2
+	_ASM_CALL(sha512_x4_avx2)

 	# state and idx are intact

 len_is_0:
diff -uprN a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
--- a/arch/x86/include/asm/alternative.h	2019-01-15 11:20:45.263168301 -0500
+++ b/arch/x86/include/asm/alternative.h	2019-01-15 11:34:00.009852393 -0500
@@ -207,8 +207,8 @@ static inline int alternatives_text_rese
 /* Like alternative_io, but for replacing a direct call with another one. */
 #define alternative_call(oldfunc, newfunc, feature, output, input...)	\
-	asm volatile (ALTERNATIVE("call %P[old]", "call %P[new]", feature) \
-		: output : [old] "i" (oldfunc), [new] "i" (newfunc), ## input)
+	asm volatile (ALTERNATIVE(_ASM_CALL(%p[old]), _ASM_CALL(%p[new]), feature) \
+		: output : [old] "X" (oldfunc), [new] "X" (newfunc), ## input)

 /*
  * Like alternative_call, but there are two features and respective functions.
@@ -218,11 +218,11 @@ static inline int alternatives_text_rese
  */
 #define alternative_call_2(oldfunc, newfunc1, feature1, newfunc2, feature2,   \
			   output, input...)				      \
-	asm volatile (ALTERNATIVE_2("call %P[old]", "call %P[new1]", feature1,\
-		"call %P[new2]", feature2)				      \
+	asm volatile (ALTERNATIVE_2(_ASM_CALL(%p[old]), _ASM_CALL(%p[new1]), feature1,\
+		_ASM_CALL(%p[new2]), feature2)				      \
		: output, ASM_CALL_CONSTRAINT				      \
-		: [old] "i" (oldfunc), [new1] "i" (newfunc1),		      \
-		  [new2] "i" (newfunc2), ## input)
+		: [old] "X" (oldfunc), [new1] "X" (newfunc1),		      \
+		  [new2] "X" (newfunc2), ## input)

 /*
  * use this macro(s) if you need more than one output parameter
diff -uprN a/arch/x86/include/asm/arch_hweight.h b/arch/x86/include/asm/arch_hweight.h
--- a/arch/x86/include/asm/arch_hweight.h	2019-01-15 11:20:45.263168301 -0500
+++ b/arch/x86/include/asm/arch_hweight.h	2019-01-15 11:34:00.009852393 -0500
@@ -3,6 +3,7 @@
 #define _ASM_X86_HWEIGHT_H

 #include
+#include

 #ifdef CONFIG_64BIT
 /* popcnt %edi, %eax */
@@ -24,7 +25,7 @@ static __always_inline unsigned int __ar
 {
	unsigned int res;

-	asm (ALTERNATIVE("call __sw_hweight32", POPCNT32, X86_FEATURE_POPCNT)
+	asm (ALTERNATIVE(_ASM_CALL(__sw_hweight32), POPCNT32, X86_FEATURE_POPCNT)
		 : "="REG_OUT (res)
		 : REG_IN (w));

@@ -52,7 +53,7 @@ static __always_inline unsigned long __a
 {
	unsigned long res;

-	asm (ALTERNATIVE("call __sw_hweight64", POPCNT64, X86_FEATURE_POPCNT)
+	asm (ALTERNATIVE(_ASM_CALL(__sw_hweight64), POPCNT64, X86_FEATURE_POPCNT)
		 : "="REG_OUT (res)
		 : REG_IN (w));
diff -uprN a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
--- a/arch/x86/include/asm/asm.h	2019-01-15 11:20:45.267168340 -0500
+++ b/arch/x86/include/asm/asm.h	2019-01-15 11:34:00.009852393 -0500
@@ -2,6 +2,42 @@
 #ifndef _ASM_X86_ASM_H
 #define _ASM_X86_ASM_H

+/* PIC modules require an indirection through GOT for
+ * external symbols. _ASM_CALL() for internal functions
+ * is optimized by replacing indirect calls with direct ones
+ * followed by 1-byte NOP paddings per a call site;
+ * Similarly, _ASM_LEA_RIP() is optimized by replacing MOV
+ * to LEA and is used to load symbol addresses on x86-64.
+ *
+ * If RETPOLINE is enabled, use PLT stubs instead to
+ * better optimize local calls.
+ */
+#if defined(MODULE) && defined(CONFIG_X86_PIC)
+# ifdef __ASSEMBLY__
+#  define _ASM_LEA_RIP(v,a)	movq v##@GOTPCREL(%rip), a
+#  ifdef CONFIG_RETPOLINE
+#   define _ASM_CALL(f)	call f##@PLT
+#  else
+#   define _ASM_CALL(f)	call *##f##@GOTPCREL(%rip)
+#  endif
+# else
+#  define _ASM_LEA_RIP(v,a)	"movq " #v "@GOTPCREL(%%rip), " #a
+#  ifdef CONFIG_RETPOLINE
+#   define _ASM_CALL(f)	"call " #f "@PLT"
+#  else
+#   define _ASM_CALL(f)	"call *" #f "@GOTPCREL(%%rip)"
+#  endif
+# endif
+#else
+# ifdef __ASSEMBLY__
+#  define _ASM_CALL(f)		call f
+#  define _ASM_LEA_RIP(v,a)	leaq v##(%rip), a
+# else
+#  define _ASM_CALL(f)		"call " #f
+#  define _ASM_LEA_RIP(v,a)	"leaq " #v "(%%rip), " #a
+# endif
+#endif
+
 #ifdef __ASSEMBLY__
 # define __ASM_FORM(x)	x
 # define __ASM_FORM_RAW(x)	x
@@ -118,6 +154,24 @@
 # define CC_OUT(c) [_cc_ ## c] "=qm"
 #endif

+/* PLT relocations in x86_64 PIC modules are already relative.
+ * However, due to inconsistent GNU binutils behavior (e.g., i386),
+ * avoid PLT relocations in all other cases (binutils bug 23997).
+ */
+#if defined(MODULE) && defined(CONFIG_X86_PIC)
+# ifdef __ASSEMBLY__
+#  define _ASM_HANDLER(x)	x##@PLT
+# else
+#  define _ASM_HANDLER(x)	x "@PLT"
+# endif
+#else
+# ifdef __ASSEMBLY__
+#  define _ASM_HANDLER(x)	(x) - .
+# else
+#  define _ASM_HANDLER(x)	"(" x ") - ."
+# endif
+#endif
+
 /* Exception table entry */
 #ifdef __ASSEMBLY__
 # define _ASM_EXTABLE_HANDLE(from, to, handler)			\
@@ -125,7 +179,7 @@
	.balign 4 ;						\
	.long (from) - . ;					\
	.long (to) - . ;					\
-	.long (handler) - . ;					\
+	.long _ASM_HANDLER(handler);				\
	.popsection

 # define _ASM_EXTABLE(from, to)					\
@@ -171,13 +225,13 @@
 .endm
 #else
-# define _EXPAND_EXTABLE_HANDLE(x) #x
+# define _EXPAND_EXTABLE_HANDLE(x) _ASM_HANDLER(#x)
 # define _ASM_EXTABLE_HANDLE(from, to, handler)			\
	" .pushsection \"__ex_table\",\"a\"\n"			\
	" .balign 4\n"						\
	" .long (" #from ") - .\n"				\
	" .long (" #to ") - .\n"				\
-	" .long (" _EXPAND_EXTABLE_HANDLE(handler) ") - .\n"	\
+	" .long " _EXPAND_EXTABLE_HANDLE(handler) "\n"		\
	" .popsection\n"

 # define _ASM_EXTABLE(from, to)					\
diff -uprN a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
--- a/arch/x86/include/asm/elf.h	2019-01-15 11:20:45.263168301 -0500
+++ b/arch/x86/include/asm/elf.h	2019-01-15 11:34:00.009852393 -0500
@@ -63,7 +63,10 @@ typedef struct user_fxsr_struct elf_fpxr
 #define R_X86_64_8		14	/* Direct 8 bit sign extended  */
 #define R_X86_64_PC8		15	/* 8 bit sign extended pc relative */

-#define R_X86_64_NUM		16
+#define R_X86_64_GOTPCRELX	41
+#define R_X86_64_REX_GOTPCRELX	42
+
+#define R_X86_64_NUM		43

 /*
  * These are used to set parameters in the core dumps.
diff -uprN a/arch/x86/include/asm/jump_label.h b/arch/x86/include/asm/jump_label.h
--- a/arch/x86/include/asm/jump_label.h	2019-01-15 11:20:45.267168340 -0500
+++ b/arch/x86/include/asm/jump_label.h	2019-01-15 11:34:00.009852393 -0500
@@ -37,7 +37,7 @@ static __always_inline bool arch_static_
		".byte " __stringify(STATIC_KEY_INIT_NOP) "\n\t"
		".pushsection __jump_table,  \"aw\" \n\t"
		_ASM_ALIGN "\n\t"
-		_ASM_PTR "1b, %l[l_yes], %P0 \n\t"
+		_ASM_PTR "1b, %l[l_yes], %p0 \n\t"
		".popsection \n\t"
		: :  "X" (&((char *)key)[branch]) : : l_yes);
@@ -53,7 +53,7 @@ static __always_inline bool arch_static_
		"2:\n\t"
		".pushsection __jump_table,  \"aw\" \n\t"
		_ASM_ALIGN "\n\t"
-		_ASM_PTR "1b, %l[l_yes], %P0 \n\t"
+		_ASM_PTR "1b, %l[l_yes], %p0 \n\t"
		".popsection \n\t"
		: :  "X" (&((char *)key)[branch]) : : l_yes);
diff -uprN a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
--- a/arch/x86/include/asm/kvm_host.h	2019-01-15 11:20:45.267168340 -0500
+++ b/arch/x86/include/asm/kvm_host.h	2019-01-15 11:34:00.009852393 -0500
@@ -1394,20 +1394,31 @@ enum {
  */
 asmlinkage void kvm_spurious_fault(void);

+#if defined(MODULE) && defined(CONFIG_X86_PIC)
+# define ___kvm_check_rebooting					\
+	"pushq %%rax \n\t"					\
+	"movq kvm_rebooting@GOTPCREL(%%rip), %%rax \n\t"	\
+	"cmpb $0, (%%rax) \n\t"					\
+	"popq %%rax \n\t"
+#else
+# define ___kvm_check_rebooting					\
+	"cmpb $0, kvm_rebooting" __ASM_SEL(,(%%rip)) " \n\t"
+#endif
+
 #define ____kvm_handle_fault_on_reboot(insn, cleanup_insn)	\
	"666: " insn "\n\t" \
	"668: \n\t" \
	".pushsection .fixup, \"ax\" \n" \
	"667: \n\t" \
	cleanup_insn "\n\t" \
-	"cmpb $0, kvm_rebooting" __ASM_SEL(,(%%rip)) " \n\t" \
+	___kvm_check_rebooting \
	"jne 668b \n\t" \
	__ASM_SIZE(push) "$0 \n\t" \
	__ASM_SIZE(push) "%%" _ASM_AX " \n\t" \
	_ASM_MOVABS " $666b, %%" _ASM_AX "\n\t" \
	_ASM_MOV " %%" _ASM_AX ", " __ASM_SEL(4,8) "(%%" _ASM_SP ") \n\t" \
	__ASM_SIZE(pop) "%%" _ASM_AX " \n\t" \
-	"call kvm_spurious_fault \n\t" \
+	_ASM_CALL(kvm_spurious_fault) " \n\t" \
	".popsection \n\t" \
	_ASM_EXTABLE(666b, 667b)
diff -uprN a/arch/x86/include/asm/module.h b/arch/x86/include/asm/module.h
--- a/arch/x86/include/asm/module.h	2019-01-15 11:20:45.267168340 -0500
+++ b/arch/x86/include/asm/module.h	2019-01-15 11:34:00.009852393 -0500
@@ -5,13 +5,32 @@
 #include
 #include

-#ifdef CONFIG_X86_PIE
+extern const char __THUNK_FOR_PLT[];
+extern const unsigned int __THUNK_FOR_PLT_SIZE;
+
+#define PLT_ENTRY_ALIGNMENT 16
+struct plt_entry {
+#ifdef CONFIG_RETPOLINE
+	u8 mov_ins[3];
+	u32 rel_addr;
+	u8 thunk[0];
+#else
+	u16 jmp_ins;
+	u32 rel_addr;
+#endif
+} __packed __aligned(PLT_ENTRY_ALIGNMENT);
+
 struct mod_got_sec {
	struct elf64_shdr	*got;
	int			got_num_entries;
	int			got_max_entries;
 };
-#endif
+
+struct mod_plt_sec {
+	struct elf64_shdr	*plt;
+	int			plt_num_entries;
+	int			plt_max_entries;
+};

 struct mod_arch_specific {
 #ifdef CONFIG_UNWINDER_ORC
@@ -19,9 +38,8 @@ struct mod_arch_specific {
	int *orc_unwind_ip;
	struct orc_entry *orc_unwind;
 #endif
-#ifdef CONFIG_X86_PIE
	struct mod_got_sec	core;
-#endif
+	struct mod_plt_sec	core_plt;
 };

 #ifdef CONFIG_X86_64
diff -uprN a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
--- a/arch/x86/include/asm/paravirt_types.h	2019-01-15 11:20:45.263168301 -0500
+++ b/arch/x86/include/asm/paravirt_types.h	2019-01-15 11:34:00.009852393 -0500
@@ -337,7 +337,7 @@ extern struct pv_lock_ops pv_lock_ops;
 #define PARAVIRT_PATCH(x)					\
	(offsetof(struct paravirt_patch_template, x) / sizeof(void *))

-#ifdef CONFIG_X86_PIE
+#if defined(CONFIG_X86_PIE) || (defined(MODULE) && defined(CONFIG_X86_PIC))
 #define paravirt_opptr_call "a"
 #define paravirt_opptr_type "p"
 #else
@@ -355,7 +355,11 @@ extern struct pv_lock_ops pv_lock_ops;
  * Generate some code, and mark it as patchable by the
  * apply_paravirt() alternate instruction patcher.
  */
-#define _paravirt_alt(insn_string, type, clobber)	\
+#if defined(MODULE) && defined(CONFIG_X86_PIC)
+# define _paravirt_alt(insn_string, type, clobber)	\
+	insn_string "\n"
+#else
+# define _paravirt_alt(insn_string, type, clobber)	\
	"771:\n\t" insn_string "\n" "772:\n"		\
	".pushsection .parainstructions,\"a\"\n"	\
	_ASM_ALIGN "\n"					\
@@ -364,6 +368,7 @@ extern struct pv_lock_ops pv_lock_ops;
	"  .byte 772b-771b\n"				\
	"  .short " clobber "\n"			\
	".popsection\n"
+#endif

 /* Generate patchable code, with the default asm parameters. */
 #define paravirt_alt(insn_string)					\
diff -uprN a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
--- a/arch/x86/include/asm/percpu.h	2019-01-15 11:20:45.263168301 -0500
+++ b/arch/x86/include/asm/percpu.h	2019-01-15 11:34:00.009852393 -0500
@@ -216,7 +216,7 @@ do {									\
 })

 /* Position Independent code uses relative addresses only */
-#ifdef CONFIG_X86_PIE
+#if defined(CONFIG_X86_PIE) || (defined(MODULE) && defined(CONFIG_X86_PIC))
 #define __percpu_stable_arg __percpu_arg(a1)
 #else
 #define __percpu_stable_arg __percpu_arg(P1)
diff -uprN a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
--- a/arch/x86/include/asm/uaccess.h	2019-01-15 11:20:45.267168340 -0500
+++ b/arch/x86/include/asm/uaccess.h	2019-01-15 11:34:00.009852393 -0500
@@ -174,7 +174,7 @@ __typeof__(__builtin_choose_expr(sizeof(
	register __inttype(*(ptr)) __val_gu asm("%"_ASM_DX);		\
	__chk_user_ptr(ptr);						\
	might_fault();							\
-	asm volatile("call __get_user_%P4"				\
+	asm volatile(_ASM_CALL(__get_user_%P4)				\
		     : "=a" (__ret_gu), "=r" (__val_gu),		\
			ASM_CALL_CONSTRAINT				\
		     : "0" (ptr), "i" (sizeof(*(ptr))));		\
@@ -183,7 +183,7 @@ __typeof__(__builtin_choose_expr(sizeof(
 })

 #define __put_user_x(size, x, ptr, __ret_pu)			\
-	asm volatile("call __put_user_" #size : "=a" (__ret_pu)	\
+	asm volatile(_ASM_CALL(__put_user_##size) : "=a" (__ret_pu)	\
		     : "0" ((typeof(*(ptr)))(x)), "c" (ptr) : "ebx")
@@ -213,7 +213,7 @@ __typeof__(__builtin_choose_expr(sizeof(
		     : : "A" (x), "r" (addr))

 #define __put_user_x8(x, ptr, __ret_pu)				\
-	asm volatile("call __put_user_8" : "=a" (__ret_pu)	\
+	asm volatile(_ASM_CALL(__put_user_8) : "=a" (__ret_pu)	\
		     : "A" ((typeof(*(ptr)))(x)), "c" (ptr) : "ebx")
 #else
 #define __put_user_asm_u64(x, ptr, retval, errret) \
diff -uprN a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h
--- a/arch/x86/include/asm/xen/hypercall.h	2019-01-15 11:20:45.267168340 -0500
+++ b/arch/x86/include/asm/xen/hypercall.h	2019-01-15 11:34:00.009852393 -0500
@@ -88,9 +88,24 @@ struct xen_dm_op_buf;

 extern struct { char _entry[32]; } hypercall_page[];

-#define __HYPERCALL		"call hypercall_page+%c[offset]"
-#define __HYPERCALL_ENTRY(x)						\
+#if defined(MODULE) && defined(CONFIG_X86_PIC)
+# ifdef CONFIG_RETPOLINE
+#  define HYPERCALL(x) long xen_hypercall_##x(void);
+#  include
+#  undef HYPERCALL
+#  include
+#  define __HYPERCALL(x) CALL_NOSPEC
+#  define __HYPERCALL_ENTRY(x)						\
+	[thunk_target] "a" (xen_hypercall_##x)
+# else
+#  define __HYPERCALL(x) "call *xen_hypercall_" #x "@GOTPCREL(%%rip)"
+#  define __HYPERCALL_ENTRY(x)
+# endif
+#else
+# define __HYPERCALL(x)	"call hypercall_page+%c[offset]"
+# define __HYPERCALL_ENTRY(x)						\
	[offset] "i" (__HYPERVISOR_##x * sizeof(hypercall_page[0]))
+#endif

 #ifdef CONFIG_X86_32
 #define __HYPERCALL_RETREG	"eax"
@@ -146,7 +161,7 @@ extern struct { char _entry[32]; } hyper
 ({									\
	__HYPERCALL_DECLS;						\
	__HYPERCALL_0ARG();						\
-	asm volatile (__HYPERCALL					\
+	asm volatile (__HYPERCALL(name)					\
		      : __HYPERCALL_0PARAM				\
		      : __HYPERCALL_ENTRY(name)				\
		      : __HYPERCALL_CLOBBER0);				\
@@ -157,7 +172,7 @@ extern struct { char _entry[32]; } hyper
 ({									\
	__HYPERCALL_DECLS;						\
	__HYPERCALL_1ARG(a1);						\
-	asm volatile (__HYPERCALL					\
+	asm volatile (__HYPERCALL(name)					\
		      : __HYPERCALL_1PARAM				\
		      : __HYPERCALL_ENTRY(name)				\
		      : __HYPERCALL_CLOBBER1);				\
@@ -168,7 +183,7 @@ extern struct { char _entry[32]; } hyper
 ({									\
	__HYPERCALL_DECLS;						\
	__HYPERCALL_2ARG(a1, a2);					\
-	asm volatile (__HYPERCALL					\
+	asm volatile (__HYPERCALL(name)					\
		      : __HYPERCALL_2PARAM				\
		      : __HYPERCALL_ENTRY(name)				\
		      : __HYPERCALL_CLOBBER2);				\
@@ -179,7 +194,7 @@ extern struct { char _entry[32]; } hyper
 ({									\
	__HYPERCALL_DECLS;						\
	__HYPERCALL_3ARG(a1, a2, a3);					\
-	asm volatile (__HYPERCALL					\
+	asm volatile (__HYPERCALL(name)					\
		      : __HYPERCALL_3PARAM				\
		      : __HYPERCALL_ENTRY(name)				\
		      : __HYPERCALL_CLOBBER3);				\
@@ -190,7 +205,7 @@ extern struct { char _entry[32]; } hyper
 ({									\
	__HYPERCALL_DECLS;						\
	__HYPERCALL_4ARG(a1, a2, a3, a4);				\
-	asm volatile (__HYPERCALL					\
+	asm volatile (__HYPERCALL(name)					\
		      : __HYPERCALL_4PARAM				\
		      : __HYPERCALL_ENTRY(name)				\
		      : __HYPERCALL_CLOBBER4);				\
@@ -201,7 +216,7 @@ extern struct { char _entry[32]; } hyper
 ({									\
	__HYPERCALL_DECLS;						\
	__HYPERCALL_5ARG(a1, a2, a3, a4, a5);				\
-	asm volatile (__HYPERCALL					\
+	asm volatile (__HYPERCALL(name)					\
		      : __HYPERCALL_5PARAM				\
		      : __HYPERCALL_ENTRY(name)				\
		      : __HYPERCALL_CLOBBER5);				\
diff -uprN a/arch/x86/Kconfig b/arch/x86/Kconfig
--- a/arch/x86/Kconfig	2019-01-15 11:20:45.259168260 -0500
+++ b/arch/x86/Kconfig	2019-01-15 11:34:00.009852393 -0500
@@ -2238,9 +2238,19 @@ config X86_PIE
	select DYNAMIC_MODULE_BASE
	select MODULE_REL_CRCS if MODVERSIONS

+config X86_PIC
+	bool
+	prompt "Enable PIC modules"
+	depends on X86_64
+	default y
+	select MODULE_REL_CRCS if MODVERSIONS
+	---help---
+	  Compile position-independent modules which can
+	  be placed anywhere in the 64-bit address space.
+
 config RANDOMIZE_BASE_LARGE
	bool "Increase the randomization range of the kernel image"
-	depends on X86_64 && RANDOMIZE_BASE
+	depends on X86_64 && RANDOMIZE_BASE && X86_PIC
	select X86_PIE
	select X86_MODULE_PLTS if MODULES
	default n
diff -uprN a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
--- a/arch/x86/kernel/ftrace.c	2019-01-15 11:20:45.271168382 -0500
+++ b/arch/x86/kernel/ftrace.c	2019-01-15 11:34:00.009852393 -0500
@@ -144,13 +144,6 @@ ftrace_modify_initial_code(unsigned long
 {
	unsigned char replaced[MCOUNT_INSN_SIZE + 1];

-	/*
-	 * If PIE is not enabled default to the original approach to code
-	 * modification.
-	 */
-	if (!IS_ENABLED(CONFIG_X86_PIE))
-		return ftrace_modify_code_direct(ip, old_code, new_code);
-
	ftrace_expected = old_code;

	/* Ensure the instructions point to a call to the GOT */
@@ -159,9 +152,12 @@ ftrace_modify_initial_code(unsigned long
		return -EFAULT;
	}

+	/*
+	 * For non-PIC code, default to the original approach to code
+	 * modification.
+	 */
	if (memcmp(replaced, got_call_preinsn, sizeof(got_call_preinsn))) {
-		WARN_ONCE(1, "invalid function call");
-		return -EINVAL;
+		return ftrace_modify_code_direct(ip, old_code, new_code);
	}

	/*
diff -uprN a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
--- a/arch/x86/kernel/module.c	2019-01-15 11:20:45.271168382 -0500
+++ b/arch/x86/kernel/module.c	2019-01-15 11:34:00.009852393 -0500
@@ -37,6 +37,9 @@
 #include
 #include
 #include
+#include
+
+static unsigned int module_plt_size;

 #if 0
 #define DEBUGP(fmt, ...)				\
@@ -90,6 +93,12 @@ static u64 find_got_kernel_entry(Elf64_S
	return 0;
 }
+#else
+static u64 find_got_kernel_entry(Elf64_Sym *sym, const Elf64_Rela *rela)
+{
+	return 0;
+}
+#endif

 static u64 module_emit_got_entry(struct module *mod, void *loc,
				 const Elf64_Rela *rela, Elf64_Sym *sym)
@@ -111,7 +120,7 @@ static u64 module_emit_got_entry(struct
	 * relocations are sorted, this will be the last entry we allocated.
	 * (if one exists).
	 */
-	if (i > 0 && got[i] == got[i - 2]) {
+	if (i > 0 && got[i] == got[i - 1]) {
		ret = (u64)&got[i - 1];
	} else {
		gotsec->got_num_entries++;
@@ -119,7 +128,52 @@ static u64 module_emit_got_entry(struct
		ret = (u64)&got[i];
	}

-	return ret + rela->r_addend;
+	return ret;
+}
+
+static bool plt_entries_equal(const struct plt_entry *a,
+			      const struct plt_entry *b)
+{
+	void *a_val, *b_val;
+
+	a_val = (void *)a + (s64)a->rel_addr;
+	b_val = (void *)b + (s64)b->rel_addr;
+
+	return a_val == b_val;
+}
+
+static void get_plt_entry(struct plt_entry *plt_entry, struct module *mod,
+			  void *loc, const Elf64_Rela *rela, Elf64_Sym *sym)
+{
+	u64 abs_val = module_emit_got_entry(mod, loc, rela, sym);
+	u32 rel_val = abs_val - (u64)&plt_entry->rel_addr
+			      - sizeof(plt_entry->rel_addr);
+
+	memcpy(plt_entry, __THUNK_FOR_PLT, __THUNK_FOR_PLT_SIZE);
+	plt_entry->rel_addr = rel_val;
+}
+
+static u64 module_emit_plt_entry(struct module *mod, void *loc,
+				 const Elf64_Rela *rela, Elf64_Sym *sym)
+{
+	struct mod_plt_sec *pltsec = &mod->arch.core_plt;
+	int i = pltsec->plt_num_entries;
+	void *plt = (void *)pltsec->plt->sh_addr + (u64)i * module_plt_size;
+
+	get_plt_entry(plt, mod, loc, rela, sym);
+
+	/*
+	 * Check if the entry we just created is a duplicate. Given that the
+	 * relocations are sorted, this will be the last entry we allocated.
+	 * (if one exists).
+	 */
+	if (i > 0 && plt_entries_equal(plt, plt - module_plt_size))
+		return (u64)(plt - module_plt_size);
+
+	pltsec->plt_num_entries++;
+	BUG_ON(pltsec->plt_num_entries > pltsec->plt_max_entries);
+
+	return (u64)plt;
 }

 #define cmp_3way(a,b)	((a) < (b) ? -1 : (a) > (b))
@@ -148,14 +202,17 @@ static bool duplicate_rel(const Elf64_Re
	return num > 0 && cmp_rela(rela + num, rela + num - 1) == 0;
 }

-static unsigned int count_gots(Elf64_Sym *syms, Elf64_Rela *rela, int num)
+static void count_gots_plts(unsigned long *num_got, unsigned long *num_plt,
+			    Elf64_Sym *syms, Elf64_Rela *rela, int num)
 {
-	unsigned int ret = 0;
	Elf64_Sym *s;
	int i;

	for (i = 0; i < num; i++) {
		switch (ELF64_R_TYPE(rela[i].r_info)) {
+		case R_X86_64_PLT32:
+		case R_X86_64_REX_GOTPCRELX:
+		case R_X86_64_GOTPCRELX:
		case R_X86_64_GOTPCREL:
			s = syms + ELF64_R_SYM(rela[i].r_info);
@@ -164,12 +221,133 @@
			 * custom one for this module.
			 */
			if (!duplicate_rel(rela, i) &&
-			    !find_got_kernel_entry(s, rela + i))
-				ret++;
+			    !find_got_kernel_entry(s, rela + i)) {
+				(*num_got)++;
+				if (ELF64_R_TYPE(rela[i].r_info) ==
+				    R_X86_64_PLT32)
+					(*num_plt)++;
+			}
			break;
		}
	}
-	return ret;
+}
+
+
+/*
+ * call *foo@GOTPCREL(%rip) ---> call foo nop
+ * jmp *foo@GOTPCREL(%rip)  ---> jmp foo nop
+ */
+static int do_relax_GOTPCRELX(Elf64_Rela *rel, void *loc)
+{
+	struct insn insn;
+	void *ins_addr = loc - 2;
+
+	kernel_insn_init(&insn, ins_addr, MAX_INSN_SIZE);
+	insn_get_length(&insn);
+
+	/* 1 byte for opcode, 1 byte for modrm, 4 bytes for m32 */
+	if (insn.length != 6 || insn.opcode.value != 0xFF)
+		return -1;
+
+	switch (insn.modrm.value) {
+	case 0x15: /* CALL */
+		*(u8 *)ins_addr = 0xe8;
+		break;
+	case 0x25: /* JMP */
+		*(u8 *)ins_addr = 0xe9;
+		break;
+	default:
+		return -1;
+	}
+	memset(ins_addr + 1, 0, 4);
+	*((u8 *)ins_addr + 5) = 0x90; /* NOP */
+
+	/* Update the relocation */
+	rel->r_info &= ~ELF64_R_TYPE(~0LU);
+	rel->r_info |= R_X86_64_PC32;
+	rel->r_offset--;
+
+	return 0;
+}
+
+
+/*
+ * mov foo@GOTPCREL(%rip), %reg ---> lea foo(%rip), %reg
+ */
+static int do_relax_REX_GOTPCRELX(Elf64_Rela *rel, void *loc)
+{
+	struct insn insn;
+	void *ins_addr = loc - 3;
+
+	kernel_insn_init(&insn, ins_addr, MAX_INSN_SIZE);
+	insn_get_length(&insn);
+
+	/* 1 byte for REX, 1 byte for opcode, 1 byte for modrm,
+	 * 4 bytes for m32.
+	 */
+	if (insn.length != 7)
+		return -1;
+
+	/* Not the MOV instruction, could be ADD, SUB etc. */
+	if (insn.opcode.value != 0x8b)
+		return 0;
+	*((u8 *)ins_addr + 1) = 0x8d; /* LEA */
+
+	/* Update the relocation. */
+	rel->r_info &= ~ELF64_R_TYPE(~0LU);
+	rel->r_info |= R_X86_64_PC32;
+
+	return 0;
+}
+
+static int apply_relaxations(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
+			     struct module *mod)
+{
+	Elf64_Sym *syms = NULL;
+	int i, j;
+
+	for (i = 0; i < ehdr->e_shnum; i++) {
+		if (sechdrs[i].sh_type == SHT_SYMTAB)
+			syms = (Elf64_Sym *)sechdrs[i].sh_addr;
+	}
+
+	if (!syms) {
+		pr_err("%s: module symtab section missing\n", mod->name);
+		return -ENOEXEC;
+	}
+
+	for (i = 0; i < ehdr->e_shnum; i++) {
+		Elf64_Rela *rels = (void *)ehdr + sechdrs[i].sh_offset;
+
+		if (sechdrs[i].sh_type != SHT_RELA)
+			continue;
+
+		for (j = 0; j < sechdrs[i].sh_size / sizeof(*rels); j++) {
+			Elf64_Rela *rel = &rels[j];
+			Elf64_Sym *sym = &syms[ELF64_R_SYM(rel->r_info)];
+			void *loc = (void *)sechdrs[sechdrs[i].sh_info].sh_addr +
+				    rel->r_offset;
+
+			if (sym->st_shndx != SHN_UNDEF) {
+				/* is local symbol */
+				switch (ELF64_R_TYPE(rel->r_info)) {
+				case R_X86_64_GOTPCRELX:
+					if (do_relax_GOTPCRELX(rel, loc))
+						BUG();
+					break;
+				case R_X86_64_REX_GOTPCRELX:
+					if (do_relax_REX_GOTPCRELX(rel, loc))
+						BUG();
+					break;
+				case R_X86_64_GOTPCREL:
+					/* cannot be relaxed, ignore it */
+					break;
+				}
+			}
+		}
+	}
+
+	return 0;
 }

 /*
@@ -179,19 +357,25 @@ static unsigned int count_gots(Elf64_Sym
 int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
			      char *secstrings, struct module *mod)
 {
-	unsigned long gots = 0;
+	unsigned long num_got = 0;
+	unsigned long num_plt = 0;
	Elf_Shdr *symtab = NULL;
	Elf64_Sym *syms = NULL;
	char *strings, *name;
	int i;

+	apply_relaxations(ehdr, sechdrs, mod);
+
	/*
-	 * Find the empty .got section so we can expand it to store the PLT
-	 * entries. Record the symtab address as well.
+	 * Find the empty .got and .plt sections so we can expand it
+	 * to store the GOT and PLT entries.
+	 * Record the symtab address as well.
	 */
	for (i = 0; i < ehdr->e_shnum; i++) {
		if (!strcmp(secstrings + sechdrs[i].sh_name, ".got")) {
			mod->arch.core.got = sechdrs + i;
+		} else if (!strcmp(secstrings + sechdrs[i].sh_name, ".plt")) {
+			mod->arch.core_plt.plt = sechdrs + i;
		} else if (sechdrs[i].sh_type == SHT_SYMTAB) {
			symtab = sechdrs + i;
			syms = (Elf64_Sym *)symtab->sh_addr;
@@ -202,6 +386,10 @@ int module_frob_arch_sections(Elf_Ehdr *
		pr_err("%s: module GOT section missing\n", mod->name);
		return -ENOEXEC;
	}
+	if (!mod->arch.core_plt.plt) {
+		pr_err("%s: module PLT section missing\n", mod->name);
+		return -ENOEXEC;
+	}
	if (!syms) {
		pr_err("%s: module symtab section missing\n", mod->name);
		return -ENOEXEC;
@@ -217,15 +405,23 @@ int module_frob_arch_sections(Elf_Ehdr *
		/* sort by type, symbol index and addend */
		sort(rels, numrels, sizeof(Elf64_Rela), cmp_rela, NULL);

-		gots += count_gots(syms, rels, numrels);
+		count_gots_plts(&num_got, &num_plt, syms, rels, numrels);
	}

	mod->arch.core.got->sh_type = SHT_NOBITS;
	mod->arch.core.got->sh_flags = SHF_ALLOC;
	mod->arch.core.got->sh_addralign = L1_CACHE_BYTES;
-	mod->arch.core.got->sh_size = (gots + 1) * sizeof(u64);
+	mod->arch.core.got->sh_size = (num_got + 1) * sizeof(u64);
	mod->arch.core.got_num_entries = 0;
-	mod->arch.core.got_max_entries = gots;
+	mod->arch.core.got_max_entries = num_got;
+
+	module_plt_size = ALIGN(__THUNK_FOR_PLT_SIZE, PLT_ENTRY_ALIGNMENT);
+	mod->arch.core_plt.plt->sh_type = SHT_NOBITS;
+	mod->arch.core_plt.plt->sh_flags = SHF_EXECINSTR | SHF_ALLOC;
+	mod->arch.core_plt.plt->sh_addralign = L1_CACHE_BYTES;
+	mod->arch.core_plt.plt->sh_size = (num_plt + 1) * module_plt_size;
+	mod->arch.core_plt.plt_num_entries = 0;
+	mod->arch.core_plt.plt_max_entries = num_plt;

	/*
	 * If a _GLOBAL_OFFSET_TABLE_ symbol exists, make it absolute for
@@ -243,7 +439,6 @@ int module_frob_arch_sections(Elf_Ehdr *
	}
	return 0;
 }
-#endif

 void *module_alloc(unsigned long size)
 {
@@ -306,6 +501,23 @@ int apply_relocate(Elf32_Shdr *sechdrs,
	return 0;
 }
 #else /*X86_64*/
+
+int check_relocation_pic_safe(Elf64_Rela *rel, Elf64_Sym *sym)
+{
+	bool isLocalSym = sym->st_shndx != SHN_UNDEF;
+
+	switch (ELF64_R_TYPE(rel->r_info)) {
+	case R_X86_64_32:
+	case R_X86_64_32S:
+	case R_X86_64_PC32:
+		if (!isLocalSym)
+			return -1;
+		break;
+	}
+
+	return 0;
+}
+
 int apply_relocate_add(Elf64_Shdr *sechdrs,
		   const char *strtab,
		   unsigned int symindex,
@@ -330,6 +542,10 @@ int apply_relocate_add(Elf64_Shdr *sechd
		sym = (Elf64_Sym *)sechdrs[symindex].sh_addr
			+ ELF64_R_SYM(rel[i].r_info);

+#ifdef CONFIG_X86_PIC
+		BUG_ON(check_relocation_pic_safe(&rel[i], sym));
+#endif
+
		DEBUGP("type %d st_value %Lx r_addend %Lx loc %Lx\n",
		       (int)ELF64_R_TYPE(rel[i].r_info),
		       sym->st_value, rel[i].r_addend, (u64)loc);
@@ -358,21 +574,30 @@ int apply_relocate_add(Elf64_Shdr *sechd
			if ((s64)val != *(s32 *)loc)
				goto overflow;
			break;
-#ifdef CONFIG_X86_PIE
+		case R_X86_64_REX_GOTPCRELX:
+		case R_X86_64_GOTPCRELX:
		case R_X86_64_GOTPCREL:
-			val = module_emit_got_entry(me, loc, rel + i, sym);
+			val = module_emit_got_entry(me, loc, rel + i, sym) +
+			      rel[i].r_addend;
			/* fallthrough */
-#endif
		case R_X86_64_PC32:
-		case R_X86_64_PLT32:
			if (*(u32 *)loc != 0)
				goto invalid_relocation;
			val -= (u64)loc;
			*(u32 *)loc = val;
-			if (IS_ENABLED(CONFIG_X86_PIE) &&
+			if ((IS_ENABLED(CONFIG_X86_PIE) ||
+			     IS_ENABLED(CONFIG_X86_PIC)) &&
			    (s64)val != *(s32 *)loc)
				goto overflow;
			break;
+		case R_X86_64_PLT32:
+			val = module_emit_plt_entry(me, loc, rel + i, sym) +
+			      rel[i].r_addend;
+			if (*(u32 *)loc != 0)
+				goto invalid_relocation;
+			val -= (u64)loc;
+			*(u32 *)loc = val;
+			break;
		default:
			pr_err("%s: Unknown rela relocation: %llu\n",
			       me->name, ELF64_R_TYPE(rel[i].r_info));
diff -uprN a/arch/x86/kernel/module.lds b/arch/x86/kernel/module.lds
--- a/arch/x86/kernel/module.lds	2019-01-15 11:20:45.271168382 -0500
+++
b/arch/x86/kernel/module.lds 2019-01-15 11:34:00.009852393 -0500 @@ -1,3 +1,4 @@ SECTIONS { .got (NOLOAD) : { BYTE(0) } + .plt (NOLOAD) : { BYTE(0) } } diff -uprN a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c --- a/arch/x86/kvm/emulate.c 2019-01-15 11:20:45.275168421 -0500 +++ b/arch/x86/kvm/emulate.c 2019-01-15 11:34:00.013854257 -0500 @@ -428,7 +428,6 @@ static int fastop(struct x86_emulate_ctx FOP_RET asm(".pushsection .fixup, \"ax\"\n" - ".global kvm_fastop_exception \n" "kvm_fastop_exception: xor %esi, %esi; ret\n" ".popsection"); diff -uprN a/arch/x86/Makefile b/arch/x86/Makefile --- a/arch/x86/Makefile 2019-01-15 11:20:45.259168260 -0500 +++ b/arch/x86/Makefile 2019-01-15 11:34:00.013854257 -0500 @@ -136,6 +136,17 @@ else KBUILD_CFLAGS += $(cflags-y) KBUILD_CFLAGS += -mno-red-zone + +ifdef CONFIG_X86_PIC + KBUILD_CFLAGS_MODULE += -fPIC -mcmodel=small -fno-stack-protector -fvisibility=hidden + ifdef CONFIG_RETPOLINE + MOD_EXTRA_LINK += $(srctree)/arch/$(SRCARCH)/module-lib/retpoline.o + else + KBUILD_CFLAGS_MODULE += -fno-plt + endif +endif + KBUILD_LDFLAGS_MODULE += -T $(srctree)/arch/x86/kernel/module.lds + ifdef CONFIG_X86_PIE KBUILD_CFLAGS += -fPIE KBUILD_LDFLAGS_MODULE += -T $(srctree)/arch/x86/kernel/module.lds diff -uprN a/arch/x86/tools/relocs.c b/arch/x86/tools/relocs.c --- a/arch/x86/tools/relocs.c 2019-01-15 11:20:45.279168462 -0500 +++ b/arch/x86/tools/relocs.c 2019-01-15 11:34:00.013854257 -0500 @@ -210,6 +210,8 @@ static const char *rel_type(unsigned typ REL_TYPE(R_X86_64_JUMP_SLOT), REL_TYPE(R_X86_64_RELATIVE), REL_TYPE(R_X86_64_GOTPCREL), + REL_TYPE(R_X86_64_REX_GOTPCRELX), + REL_TYPE(R_X86_64_GOTPCRELX), REL_TYPE(R_X86_64_32), REL_TYPE(R_X86_64_32S), REL_TYPE(R_X86_64_16), @@ -866,6 +868,8 @@ static int do_reloc64(struct section *se offset += per_cpu_load_addr; switch (r_type) { + case R_X86_64_REX_GOTPCRELX: + case R_X86_64_GOTPCRELX: case R_X86_64_GOTPCREL: case R_X86_64_NONE: /* NONE can be ignored. 
*/ diff -uprN a/Makefile b/Makefile --- a/Makefile 2019-01-15 11:20:45.087166523 -0500 +++ b/Makefile 2019-01-15 11:34:00.013854257 -0500 @@ -1207,10 +1207,10 @@ all: modules # using awk while concatenating to the final file. PHONY += modules -modules: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),vmlinux) modules.builtin +modules: $(vmlinux-dirs) $(if $(KBUILD_BUILTIN),vmlinux) modules.builtin $(MOD_EXTRA_LINK) $(Q)$(AWK) '!x[$$0]++' $(vmlinux-dirs:%=$(objtree)/%/modules.order) > $(objtree)/modules.order @$(kecho) ' Building modules, stage 2.'; - $(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modpost + $(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modpost MOD_EXTRA_LINK=$(MOD_EXTRA_LINK) modules.builtin: $(vmlinux-dirs:%=%/modules.builtin) $(Q)$(AWK) '!x[$$0]++' $^ > $(objtree)/modules.builtin diff -uprN a/scripts/Makefile.modpost b/scripts/Makefile.modpost --- a/scripts/Makefile.modpost 2019-01-15 11:20:45.399169674 -0500 +++ b/scripts/Makefile.modpost 2019-01-15 11:34:00.013854257 -0500 @@ -125,7 +125,7 @@ quiet_cmd_ld_ko_o = LD [M] $@ -o $@ $(filter-out FORCE,$^) ; \ $(if $(ARCH_POSTLINK), $(MAKE) -f $(ARCH_POSTLINK) $@, true) -$(modules): %.ko :%.o %.mod.o FORCE +$(modules): %.ko :%.o %.mod.o $(MOD_EXTRA_LINK) FORCE +$(call if_changed,ld_ko_o) targets += $(modules) diff -uprN a/scripts/recordmcount.c b/scripts/recordmcount.c --- a/scripts/recordmcount.c 2019-01-15 11:20:45.399169674 -0500 +++ b/scripts/recordmcount.c 2019-01-15 11:34:00.013854257 -0500 @@ -453,7 +453,8 @@ static int make_nop_x86(void *map, size_ /* Swap the stub and nop for a got call if the binary is built with PIE */ static int is_fake_mcount_x86_x64(Elf64_Rel const *rp) { - if (ELF64_R_TYPE(rp->r_info) == R_X86_64_GOTPCREL) { + if (ELF64_R_TYPE(rp->r_info) == R_X86_64_GOTPCREL || + ELF64_R_TYPE(rp->r_info) == R_X86_64_GOTPCRELX) { ideal_nop = ideal_nop6_x86_64; ideal_nop_x86_size = sizeof(ideal_nop6_x86_64); stub_x86 = stub_got_x86;