From patchwork Sat Jan 20 21:06:04 2018
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 10176727
From: Dan Williams
To: tglx@linutronix.de
Cc: Mark Rutland, linux-arch@vger.kernel.org, Kees Cook,
 kernel-hardening@lists.openwall.com, Peter Zijlstra,
 gregkh@linuxfoundation.org, Jonathan Corbet, Will Deacon,
 torvalds@linux-foundation.org, alan@linux.intel.com
Date: Sat, 20 Jan 2018 13:06:04 -0800
Message-ID: <151648236454.34747.93245075402067564.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <151648235823.34747.15181877619346237802.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <151648235823.34747.15181877619346237802.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.17.1-9-g687f
Subject: [kernel-hardening] [PATCH v4.1 01/10] Documentation: document array_ptr

From: Mark Rutland

Document the rationale and usage of the new array_ptr() helper.

Signed-off-by: Mark Rutland
Signed-off-by: Will Deacon
Cc: Dan Williams
Cc: Jonathan Corbet
Cc: Peter Zijlstra
Reviewed-by: Kees Cook
Signed-off-by: Dan Williams
---
 Documentation/speculation.txt | 143 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 143 insertions(+)
 create mode 100644 Documentation/speculation.txt

diff --git a/Documentation/speculation.txt b/Documentation/speculation.txt
new file mode 100644
index 000000000000..a47fbffe0dab
--- /dev/null
+++ b/Documentation/speculation.txt
@@ -0,0 +1,143 @@
+This document explains potential effects of speculation, and how undesirable
+effects can be mitigated portably using common APIs.
+
+===========
+Speculation
+===========
+
+To improve performance and minimize average latencies, many contemporary CPUs
+employ speculative execution techniques such as branch prediction, performing
+work which may be discarded at a later stage.
+
+Typically speculative execution cannot be observed from architectural state,
+such as the contents of registers. However, in some cases it is possible to
+observe its impact on microarchitectural state, such as the presence or
+absence of data in caches. Such state may form side-channels which can be
+observed to extract secret information.
+
+For example, in the presence of branch prediction, it is possible for bounds
+checks to be ignored by code which is speculatively executed. Consider the
+following code:
+
+	int load_array(int *array, unsigned int idx)
+	{
+		if (idx >= MAX_ARRAY_ELEMS)
+			return 0;
+		else
+			return array[idx];
+	}
+
+Which, on arm64, may be compiled to an assembly sequence such as:
+
+	CMP	<idx>, #MAX_ARRAY_ELEMS
+	B.LT	less
+	MOV	<returnval>, #0
+	RET
+  less:
+	LDR	<returnval>, [<array>, <idx>]
+	RET
+
+It is possible that a CPU mis-predicts the conditional branch, and
+speculatively loads array[idx], even if idx >= MAX_ARRAY_ELEMS. This value
+will subsequently be discarded, but the speculated load may affect
+microarchitectural state which can be subsequently measured.
+
+More complex sequences involving multiple dependent memory accesses may
+result in sensitive information being leaked. Consider the following code,
+building on the prior example:
+
+	int load_dependent_arrays(int *arr1, int *arr2, int idx)
+	{
+		int val1, val2;
+
+		val1 = load_array(arr1, idx);
+		val2 = load_array(arr2, val1);
+
+		return val2;
+	}
+
+Under speculation, the first call to load_array() may return the value of an
+out-of-bounds address, while the second call will influence
+microarchitectural state dependent on this value. This may provide an
+arbitrary read primitive.
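+
+To illustrate how such a leak might be recovered, consider the following
+sketch. It is illustrative only: flush_from_cache() and time_access() are
+hypothetical helpers, not kernel APIs, and a real attack must contend with
+noise and with the architectural (non-speculative) access to arr2[0]. An
+attacker who can evict arr2 from the cache and later time accesses to it can
+infer the speculatively-loaded val1 from which element became cached:
+
+	int recover_val1(int *arr1, int *arr2, int oob_idx)
+	{
+		unsigned long t, best_time = ULONG_MAX;
+		int i, best = -1;
+
+		/* Evict every element of arr2 from the cache. */
+		for (i = 0; i < MAX_ARRAY_ELEMS; i++)
+			flush_from_cache(&arr2[i]);
+
+		/*
+		 * May trigger a speculative out-of-bounds load of
+		 * arr1[oob_idx], once the branch predictor has been
+		 * suitably trained with in-bounds calls.
+		 */
+		load_dependent_arrays(arr1, arr2, oob_idx);
+
+		/* The element which loads fastest is likely cached. */
+		for (i = 0; i < MAX_ARRAY_ELEMS; i++) {
+			t = time_access(&arr2[i]);
+			if (t < best_time) {
+				best_time = t;
+				best = i;
+			}
+		}
+
+		/* best likely equals the speculatively-read val1. */
+		return best;
+	}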
+
+====================================
+Mitigating speculation side-channels
+====================================
+
+The kernel provides a generic API to ensure that bounds checks are respected
+even under speculation. Architectures which are affected by speculation-based
+side-channels are expected to implement these primitives.
+
+The array_ptr() helper in <linux/nospec.h> can be used to prevent
+information from being leaked via side-channels.
+
+A call to array_ptr(arr, idx, sz) returns a sanitized pointer to arr[idx]
+only if idx falls in the [0, sz) interval. When idx < 0 or idx >= sz, NULL is
+returned. Additionally, an out-of-bounds pointer is not propagated to code
+which is speculatively executed.
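+
+As an illustration of the underlying technique (a sketch only, with assumed
+names; the in-kernel definition may differ), the index can be clamped with a
+branchless mask which is all ones when idx lies in [0, sz) and zero
+otherwise, so that even a mis-speculated path only ever sees NULL or an
+in-bounds pointer:
+
+	/*
+	 * Sketch of a generic, branchless bounds clamp. Assumes idx and
+	 * sz are both below LONG_MAX, and relies on an arithmetic right
+	 * shift. The mask is ~0UL when 0 <= idx < sz, and 0 otherwise;
+	 * since no branch is involved, there is nothing for the CPU to
+	 * mis-predict.
+	 */
+	#define example_array_ptr_mask(idx, sz)				\
+		(~(long)((idx) | ((sz) - 1 - (idx))) >> (BITS_PER_LONG - 1))
+
+	#define example_array_ptr(base, idx, sz)			\
+	({								\
+		union { typeof(*(base)) *_ptr; unsigned long _bit; } _u;\
+		typeof(*(base)) *_arr = (base);				\
+		unsigned long _i = (idx);				\
+		unsigned long _mask = example_array_ptr_mask(_i, (sz));	\
+									\
+		/* Either &_arr[_i] (in bounds) or &_arr[0] ... */	\
+		_u._ptr = _arr + (_i & _mask);				\
+		/* ... which the mask then turns into NULL. */		\
+		_u._bit &= _mask;					\
+		_u._ptr;						\
+	})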
+
+This can be used to protect the earlier load_array() example:
+
+	int load_array(int *array, unsigned int idx)
+	{
+		int *elem;
+
+		elem = array_ptr(array, idx, MAX_ARRAY_ELEMS);
+		if (elem)
+			return *elem;
+		else
+			return 0;
+	}
+
+This can also be used in situations where multiple fields of a structure are
+accessed:
+
+	struct foo array[SIZE];
+	int a, b;
+
+	void do_thing(int idx)
+	{
+		struct foo *elem;
+
+		elem = array_ptr(array, idx, SIZE);
+		if (elem) {
+			a = elem->field_a;
+			b = elem->field_b;
+		}
+	}
+
+It is imperative that the returned pointer is used. Pointers which are
+generated separately are subject to a number of potential CPU and compiler
+optimizations, and may still be used speculatively. For example, this means
+that the following sequence is unsafe:
+
+	struct foo array[SIZE];
+	int a, b;
+
+	void do_thing(int idx)
+	{
+		if (array_ptr(array, idx, SIZE) != NULL) {
+			// unsafe as wrong pointer is used
+			a = array[idx].field_a;
+			b = array[idx].field_b;
+		}
+	}
+
+Similarly, it is unsafe to compare the returned pointer with other pointers,
+as this may permit the compiler to substitute one pointer with another,
+permitting speculation. For example, the following sequence is unsafe:
+
+	struct foo array[SIZE];
+	int a, b;
+
+	void do_thing(int idx)
+	{
+		struct foo *elem = array_ptr(array, idx, SIZE);
+
+		// unsafe due to pointer substitution
+		if (elem == &array[idx]) {
+			a = elem->field_a;
+			b = elem->field_b;
+		}
+	}
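+
+The safe form of both of the above is the pattern shown earlier: test and
+dereference only the sanitized pointer returned by array_ptr(), without
+recomputing or comparing it:
+
+	struct foo array[SIZE];
+	int a, b;
+
+	void do_thing(int idx)
+	{
+		struct foo *elem = array_ptr(array, idx, SIZE);
+
+		// safe: only the sanitized pointer is dereferenced
+		if (elem) {
+			a = elem->field_a;
+			b = elem->field_b;
+		}
+	}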