| Message ID | 1505940337-79069-4-git-send-email-keescook@chromium.org (mailing list archive) |
|---|---|
| State | New, archived |
On Wed, 20 Sep 2017, Kees Cook wrote:

> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -1291,7 +1291,8 @@ void __init kmem_cache_init(void)
>  	 */
>  	kmalloc_caches[INDEX_NODE] = create_kmalloc_cache(
>  				kmalloc_info[INDEX_NODE].name,
> -				kmalloc_size(INDEX_NODE), ARCH_KMALLOC_FLAGS);
> +				kmalloc_size(INDEX_NODE), ARCH_KMALLOC_FLAGS,
> +				0, kmalloc_size(INDEX_NODE));
>  	slab_state = PARTIAL_NODE;
>  	setup_kmalloc_cache_index_table();

Ok, this presumes that at some point we will be able to restrict the number
of bytes writeable and thus set the offset and size fields to different
values. Is that realistic?

We already whitelist all kmalloc caches (see first patch).

So what is the point of this patch?
On Thu, Sep 21, 2017 at 8:27 AM, Christopher Lameter <cl@linux.com> wrote:
> On Wed, 20 Sep 2017, Kees Cook wrote:
>
>> --- a/mm/slab.c
>> +++ b/mm/slab.c
>> @@ -1291,7 +1291,8 @@ void __init kmem_cache_init(void)
>>  	 */
>>  	kmalloc_caches[INDEX_NODE] = create_kmalloc_cache(
>>  				kmalloc_info[INDEX_NODE].name,
>> -				kmalloc_size(INDEX_NODE), ARCH_KMALLOC_FLAGS);
>> +				kmalloc_size(INDEX_NODE), ARCH_KMALLOC_FLAGS,
>> +				0, kmalloc_size(INDEX_NODE));
>>  	slab_state = PARTIAL_NODE;
>>  	setup_kmalloc_cache_index_table();
>
> Ok this presumes that at some point we will be able to restrict the number
> of bytes writeable and thus set the offset and size field to different
> values. Is that realistic?
>
> We already whitelist all kmalloc caches (see first patch).
>
> So what is the point of this patch?

The DMA kmalloc caches are not whitelisted:

>> 		kmalloc_dma_caches[i] = create_kmalloc_cache(n,
>> -			size, SLAB_CACHE_DMA | flags);
>> +			size, SLAB_CACHE_DMA | flags, 0, 0);

So this is creating the distinction between the kmallocs that go to
userspace and those that don't. The expectation is that future work
can start to distinguish between "for userspace" and "only kernel"
kmalloc allocations, as is already done here for DMA.

-Kees
On Thu, 21 Sep 2017, Kees Cook wrote:

> > So what is the point of this patch?
>
> The DMA kmalloc caches are not whitelisted:

The DMA kmalloc caches are pretty obsolete and mostly there for obscure
drivers.

??

> >> 		kmalloc_dma_caches[i] = create_kmalloc_cache(n,
> >> -			size, SLAB_CACHE_DMA | flags);
> >> +			size, SLAB_CACHE_DMA | flags, 0, 0);
>
> So this is creating the distinction between the kmallocs that go to
> userspace and those that don't. The expectation is that future work
> can start to distinguish between "for userspace" and "only kernel"
> kmalloc allocations, as is already done here for DMA.

The creation of the kmalloc caches in earlier patches already set up the
"whitelisting". Why do it twice?
On Thu, Sep 21, 2017 at 9:04 AM, Christopher Lameter <cl@linux.com> wrote:
> On Thu, 21 Sep 2017, Kees Cook wrote:
>
>> > So what is the point of this patch?
>>
>> The DMA kmalloc caches are not whitelisted:
>
> The DMA kmalloc caches are pretty obsolete and mostly there for obscure
> drivers.
>
> ??

They may be obsolete, but they're still in the kernel, and they aren't
copied to userspace, so we can mark them.

>> >> 		kmalloc_dma_caches[i] = create_kmalloc_cache(n,
>> >> -			size, SLAB_CACHE_DMA | flags);
>> >> +			size, SLAB_CACHE_DMA | flags, 0, 0);
>>
>> So this is creating the distinction between the kmallocs that go to
>> userspace and those that don't. The expectation is that future work
>> can start to distinguish between "for userspace" and "only kernel"
>> kmalloc allocations, as is already done here for DMA.
>
> The creation of the kmalloc caches in earlier patches already setup the
> "whitelisting". Why do it twice?

Patch 1 is to allow for things to mark their whitelists. Patch 30
disables the full whitelisting, since by then we've defined them all, so
the kmalloc caches need to mark themselves as whitelisted. Patch 1
leaves unmarked things whitelisted so we can progressively tighten the
restriction and have a bisectable series. (i.e. if there is something
wrong with one of the whitelists in the series, it will bisect to that
one, not the one that removes the global whitelist from patch 1.)

-Kees
diff --git a/mm/slab.c b/mm/slab.c
index df268999cf02..9af16f675927 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1291,7 +1291,8 @@ void __init kmem_cache_init(void)
 	 */
 	kmalloc_caches[INDEX_NODE] = create_kmalloc_cache(
 				kmalloc_info[INDEX_NODE].name,
-				kmalloc_size(INDEX_NODE), ARCH_KMALLOC_FLAGS);
+				kmalloc_size(INDEX_NODE), ARCH_KMALLOC_FLAGS,
+				0, kmalloc_size(INDEX_NODE));
 	slab_state = PARTIAL_NODE;
 	setup_kmalloc_cache_index_table();
diff --git a/mm/slab.h b/mm/slab.h
index 044755ff9632..2e0fe357d777 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -97,7 +97,8 @@ struct kmem_cache *kmalloc_slab(size_t, gfp_t);
 extern int __kmem_cache_create(struct kmem_cache *, unsigned long flags);
 
 extern struct kmem_cache *create_kmalloc_cache(const char *name, size_t size,
-			unsigned long flags);
+			unsigned long flags, size_t useroffset,
+			size_t usersize);
 extern void create_boot_cache(struct kmem_cache *, const char *name,
 			size_t size, unsigned long flags, size_t useroffset,
 			size_t usersize);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 36408f5f2a34..d4e6442f9bbc 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -920,14 +920,15 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name, size_t siz
 }
 
 struct kmem_cache *__init create_kmalloc_cache(const char *name, size_t size,
-				unsigned long flags)
+				unsigned long flags, size_t useroffset,
+				size_t usersize)
 {
 	struct kmem_cache *s = kmem_cache_zalloc(kmem_cache, GFP_NOWAIT);
 
 	if (!s)
 		panic("Out of memory when creating slab %s\n", name);
 
-	create_boot_cache(s, name, size, flags, 0, size);
+	create_boot_cache(s, name, size, flags, useroffset, usersize);
 	list_add(&s->list, &slab_caches);
 	memcg_link_cache(s);
 	s->refcount = 1;
@@ -1081,7 +1082,8 @@ void __init setup_kmalloc_cache_index_table(void)
 static void __init new_kmalloc_cache(int idx, unsigned long flags)
 {
 	kmalloc_caches[idx] = create_kmalloc_cache(kmalloc_info[idx].name,
-					kmalloc_info[idx].size, flags);
+					kmalloc_info[idx].size, flags, 0,
+					kmalloc_info[idx].size);
 }
 
 /*
@@ -1122,7 +1124,7 @@ void __init create_kmalloc_caches(unsigned long flags)
 			BUG_ON(!n);
 			kmalloc_dma_caches[i] = create_kmalloc_cache(n,
-				size, SLAB_CACHE_DMA | flags);
+				size, SLAB_CACHE_DMA | flags, 0, 0);
 		}
 	}
 #endif