From patchwork Tue Jan 16 18:17:01 2018
X-Patchwork-Submitter: "Christoph Lameter (Ampere)"
X-Patchwork-Id: 10167843
Date: Tue, 16 Jan 2018 12:17:01 -0600 (CST)
From: Christopher Lameter
To: Matthew Wilcox
Cc: Kees Cook, linux-kernel@vger.kernel.org, David Windsor, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Andrew Morton, linux-mm@kvack.org,
    linux-xfs@vger.kernel.org, Linus Torvalds, Alexander Viro,
    Andy Lutomirski, Christoph Hellwig, "David S. Miller", Laura Abbott,
    Mark Rutland, "Martin K. Petersen", Paolo Bonzini,
    Christian Borntraeger, Christoffer Dall, Dave Kleikamp, Jan Kara,
    Luis de Bethencourt, Marc Zyngier, Rik van Riel, Matthew Garrett,
    linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
    netdev@vger.kernel.org, kernel-hardening@lists.openwall.com
Subject: Re: kmem_cache_attr (was Re: [PATCH 04/36] usercopy: Prepare for usercopy whitelisting)
References: <1515531365-37423-1-git-send-email-keescook@chromium.org>
 <1515531365-37423-5-git-send-email-keescook@chromium.org>
 <20180114230719.GB32027@bombadil.infradead.org>
 <20180116160525.GF30073@bombadil.infradead.org>
 <20180116174315.GA10461@bombadil.infradead.org>

A draft patch of how the data structures could change. The kmem_cache_attr
is read-only.

Index: linux/include/linux/slab.h
===================================================================
--- linux.orig/include/linux/slab.h
+++ linux/include/linux/slab.h
@@ -135,9 +135,17 @@ struct mem_cgroup;
 void __init kmem_cache_init(void);
 bool slab_is_available(void);
 
-struct kmem_cache *kmem_cache_create(const char *, size_t, size_t,
-			slab_flags_t,
-			void (*)(void *));
+typedef void (*kmem_cache_ctor)(void *);
+
+struct kmem_cache_attr {
+	char name[16];
+	unsigned int size;
+	unsigned int align;
+	slab_flags_t flags;
+	kmem_cache_ctor ctor;
+};
+
+struct kmem_cache *kmem_cache_create(const struct kmem_cache_attr *);
 void kmem_cache_destroy(struct kmem_cache *);
 int kmem_cache_shrink(struct kmem_cache *);

Index: linux/include/linux/slab_def.h
===================================================================
--- linux.orig/include/linux/slab_def.h
+++ linux/include/linux/slab_def.h
@@ -10,6 +10,7 @@
 
 struct kmem_cache {
 	struct array_cache __percpu *cpu_cache;
+	struct kmem_cache_attr *attr;
 
/* 1) Cache tunables.
   Protected by slab_mutex */
 	unsigned int batchcount;
@@ -35,14 +36,9 @@ struct kmem_cache {
 	struct kmem_cache *freelist_cache;
 	unsigned int freelist_size;
 
-	/* constructor func */
-	void (*ctor)(void *obj);
-
 /* 4) cache creation/removal */
-	const char *name;
 	struct list_head list;
 	int refcount;
-	int object_size;
 	int align;
 
/* 5) statistics */

Index: linux/include/linux/slub_def.h
===================================================================
--- linux.orig/include/linux/slub_def.h
+++ linux/include/linux/slub_def.h
@@ -83,9 +83,9 @@ struct kmem_cache {
 	struct kmem_cache_cpu __percpu *cpu_slab;
 	/* Used for retriving partial slabs etc */
 	slab_flags_t flags;
+	struct kmem_cache_attr *attr;
 	unsigned long min_partial;
 	int size;		/* The size of an object including meta data */
-	int object_size;	/* The size of an object without meta data */
 	int offset;		/* Free pointer offset. */
#ifdef CONFIG_SLUB_CPU_PARTIAL
 	int cpu_partial;	/* Number of per cpu partial objects to keep around */
@@ -97,12 +97,10 @@ struct kmem_cache {
 	struct kmem_cache_order_objects min;
 	gfp_t allocflags;	/* gfp flags to use on each alloc */
 	int refcount;		/* Refcount for slab cache destroy */
-	void (*ctor)(void *);
 	int inuse;		/* Offset to metadata */
 	int align;		/* Alignment */
 	int reserved;		/* Reserved bytes at the end of slabs */
 	int red_left_pad;	/* Left redzone padding size */
-	const char *name;	/* Name (only for display!) */
 	struct list_head list;	/* List of slab caches */
#ifdef CONFIG_SYSFS
 	struct kobject kobj;	/* For sysfs */

Index: linux/mm/slab.h
===================================================================
--- linux.orig/mm/slab.h
+++ linux/mm/slab.h
@@ -18,13 +18,11 @@
  * SLUB is no longer needed.
  */
 struct kmem_cache {
-	unsigned int object_size;/* The original size of the object */
+	struct kmem_cache_attr *attr;
 	unsigned int size;	/* The aligned/padded/added on size */
 	unsigned int align;	/* Alignment as calculated */
 	slab_flags_t flags;	/* Active flags on the slab */
-	const char *name;	/* Slab name for sysfs */
 	int refcount;		/* Use counter */
-	void (*ctor)(void *);	/* Called on object slot creation */
 	struct list_head list;	/* List of all slab caches on the system */
 };
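To make the proposed calling convention concrete, here is a minimal
self-contained userspace sketch of the caller side. It is not part of the
patch: the stub kmem_cache_create(), the slab_flags_t typedef, and the
my_attr/my_ctor names are illustrative stand-ins. The point it shows is
that the attr is a read-only descriptor the caller can place in rodata,
and the runtime cache just keeps one pointer to it instead of copied
name/size/ctor fields:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef unsigned int slab_flags_t;        /* stand-in for the kernel type */
typedef void (*kmem_cache_ctor)(void *);  /* as in the draft patch */

/* The proposed read-only descriptor. */
struct kmem_cache_attr {
	char name[16];
	unsigned int size;
	unsigned int align;
	slab_flags_t flags;
	kmem_cache_ctor ctor;
};

/* Toy mock of the runtime cache; the real struct keeps far more state. */
struct kmem_cache {
	const struct kmem_cache_attr *attr;
	unsigned int size;	/* derived from attr->size (padding/alignment) */
	int refcount;
};

/* Stub of the proposed single-argument create: store the attr pointer
 * instead of duplicating name, size, and ctor into the cache itself. */
static struct kmem_cache *kmem_cache_create(const struct kmem_cache_attr *attr)
{
	struct kmem_cache *s = malloc(sizeof(*s));

	if (!s)
		return NULL;
	s->attr = attr;		/* one pointer replaces the copied fields */
	s->size = attr->size;	/* real code would round up for alignment */
	s->refcount = 1;
	return s;
}

/* Caller side: the descriptor never changes, so it can be const/rodata. */
static void my_ctor(void *obj)
{
	memset(obj, 0, sizeof(int));
}

static const struct kmem_cache_attr my_attr = {
	.name  = "my_cache",
	.size  = sizeof(int),
	.align = 0,
	.flags = 0,
	.ctor  = my_ctor,
};
```

A caller would then do `s = kmem_cache_create(&my_attr);` and later read
the name or ctor back through `s->attr`, which is what lets the dedicated
object_size, name, and ctor fields drop out of the per-allocator structs.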