From patchwork Fri Jul 7 13:50:33 2017
X-Patchwork-Submitter: "Christoph Lameter (Ampere)"
X-Patchwork-Id: 9830489
Date: Fri, 7 Jul 2017 08:50:33 -0500 (CDT)
From: Christoph Lameter
To: Kees Cook
Cc: Rik van Riel, Andrew Morton, Pekka Enberg, David Rientjes, Joonsoo Kim,
    "Paul E. McKenney", Ingo Molnar, Josh Triplett, Andy Lutomirski,
    Nicolas Pitre, Tejun Heo, Daniel Mack, Sebastian Andrzej Siewior,
    Sergey Senozhatsky, Helge Deller, Linux-MM, Tycho Andersen, LKML,
    "kernel-hardening@lists.openwall.com"
References: <20170706002718.GA102852@beast> <1499363602.26846.3.camel@redhat.com>
Subject: [kernel-hardening] Re: [PATCH v3] mm: Add SLUB free list pointer obfuscation

On Thu, 6 Jul 2017, Kees Cook wrote:

> Right. This is about blocking the escalation of attack capability. For
> slab object overflow flaws, there are mainly two exploitation methods:
> adjacent allocated object overwrite and adjacent freed object
> overwrite (i.e. a freelist pointer overwrite). The first attack
> depends heavily on which slab cache (and therefore which structures)
> has been exposed by the bug. It's a very narrow and specific attack
> method. The freelist attack is entirely general purpose since it
> provides a reliable way to gain arbitrary write capabilities.
> Protecting against that attack greatly narrows the options for an
> attacker, which makes attacks more expensive to create and possibly
> less reliable (and reliability is crucial to successful attacks).

The simplest thing here is to vary the location of the freelist pointer.
That way you cannot hit the freepointer in a deterministic way. The
freepointer is put at offset 0 right now, but you could put it anywhere
in the object.

Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c
+++ linux/mm/slub.c
@@ -3467,7 +3467,8 @@ static int calculate_sizes(struct kmem_c
 		 */
 		s->offset = size;
 		size += sizeof(void *);
-	}
+	} else
+		s->offset = s->size / sizeof(void *) *

 #ifdef CONFIG_SLUB_DEBUG
 	if (flags & SLAB_STORE_USER)