From patchwork Sun Dec 1 01:56:11 2019
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11268411
Date: Sat, 30 Nov 2019 17:56:11 -0800
From: akpm@linux-foundation.org
To: akpm@linux-foundation.org, ddstreet@ieee.org, henrywolfeburns@gmail.com,
 linux-mm@kvack.org, mm-commits@vger.kernel.org, shakeelb@google.com,
 torvalds@linux-foundation.org, vitaly.vul@sony.com, vitaly.wool@konsulko.com
Subject: [patch 117/158] mm/z3fold.c: add inter-page compaction
Message-ID: <20191201015611.fw2I5fblp%akpm@linux-foundation.org>

From: Vitaly Wool
Subject: mm/z3fold.c: add inter-page compaction

For each page scheduled for compaction (e.g. by z3fold_free()), try to
apply inter-page compaction before running the traditional (existing)
intra-page compaction.  That is, if the page holds only one buddy, we
treat that buddy as a new object that we aim to place into an existing
z3fold page.  If such a page is found, the object is transferred and the
old page is freed completely.  The transferred object is called "foreign"
and is treated slightly differently thereafter.

Namely, we increase the "foreign handle" counter for the new page.  Pages
with a non-zero "foreign handle" count become unmovable.  This patch also
implements "foreign handle" detection when a handle is freed, decrementing
the foreign handle counter accordingly, so a page may become movable again
as time goes by.

As a result, we almost always have exactly three objects per page and a
significantly better average compression ratio.

[cai@lca.pw: fix -Wunused-but-set-variable warnings]
  Link: http://lkml.kernel.org/r/1570542062-29144-1-git-send-email-cai@lca.pw
[vitalywool@gmail.com: avoid subtle race when freeing slots]
  Link: http://lkml.kernel.org/r/20191127152118.6314b99074b0626d4c5a8835@gmail.com
[vitalywool@gmail.com: compact objects more accurately]
  Link: http://lkml.kernel.org/r/20191127152216.6ad33745a21ba71c53606acb@gmail.com
[vitalywool@gmail.com: protect handle reads]
  Link: http://lkml.kernel.org/r/20191127152345.8059852f60947686674d726d@gmail.com
Link: http://lkml.kernel.org/r/20191006041457.24113-1-vitalywool@gmail.com
Signed-off-by: Vitaly Wool
Cc: Dan Streetman
Cc: Henry Burns
Cc: Shakeel Butt
Signed-off-by: Andrew Morton
---

 mm/z3fold.c |  375 ++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 303 insertions(+), 72 deletions(-)
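The core idea is easiest to see in isolation.  The following is a minimal
userspace sketch of the inter-page compaction step, not the kernel code:
"struct zpage", NSLOTS and the fixed-size buffers are simplified stand-ins
for the real z3fold structures.

/*
 * Illustrative userspace model of the inter-page compaction step.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define NSLOTS 3		/* FIRST, MIDDLE, LAST */

struct zpage {
	size_t chunks[NSLOTS];	/* 0 means the slot is empty */
	char data[NSLOTS][64];
	int foreign_handles;	/* foreign objects hosted by this page */
};

/* A page holding exactly one buddy is a candidate for migration. */
static bool buddy_single(const struct zpage *p)
{
	int used = 0;

	for (int i = 0; i < NSLOTS; i++)
		used += p->chunks[i] != 0;
	return used == 1;
}

/* Move the single object of 'src' into a free slot of 'dst'. */
static bool compact_single_buddy(struct zpage *src, struct zpage *dst)
{
	if (!buddy_single(src) || src->foreign_handles)
		return false;

	for (int i = 0; i < NSLOTS; i++) {
		if (src->chunks[i] == 0)
			continue;
		for (int j = 0; j < NSLOTS; j++) {
			if (dst->chunks[j] != 0)
				continue;
			memcpy(dst->data[j], src->data[i], sizeof(dst->data[j]));
			dst->chunks[j] = src->chunks[i];
			dst->foreign_handles++;	/* dst is now unmovable */
			src->chunks[i] = 0;	/* src is empty, free it */
			return true;
		}
	}
	return false;
}

int main(void)
{
	struct zpage a = { .chunks = { 5, 0, 0 } };	/* single buddy */
	struct zpage b = { .chunks = { 4, 0, 7 } };	/* has a free slot */

	strcpy(a.data[0], "lone buddy");
	if (compact_single_buddy(&a, &b))
		printf("moved; b now hosts %d foreign handle(s): \"%s\"\n",
		       b.foreign_handles, b.data[1]);
	return 0;
}

The real code additionally rewrites the moved object's handle in place,
under the new slots lock, so handles already held by users stay valid
after the move.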
--- a/mm/z3fold.c~z3fold-add-inter-page-compaction
+++ a/mm/z3fold.c
@@ -41,6 +41,7 @@
 #include <linux/workqueue.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
+#include <linux/rwlock.h>
 #include <linux/zpool.h>
 #include <linux/magic.h>
 
@@ -90,6 +91,7 @@ struct z3fold_buddy_slots {
 	 */
 	unsigned long slot[BUDDY_MASK + 1];
 	unsigned long pool; /* back link + flags */
+	rwlock_t lock;
 };
 
 #define HANDLE_FLAG_MASK	(0x03)
@@ -124,6 +126,7 @@ struct z3fold_header {
 	unsigned short start_middle;
 	unsigned short first_num:2;
 	unsigned short mapped_count:2;
+	unsigned short foreign_handles:2;
 };
 
 /**
@@ -178,6 +181,19 @@ enum z3fold_page_flags {
 	PAGE_CLAIMED, /* by either reclaim or free */
 };
 
+/*
+ * handle flags, go under HANDLE_FLAG_MASK
+ */
+enum z3fold_handle_flags {
+	HANDLES_ORPHANED = 0,
+};
+
+/*
+ * Forward declarations
+ */
+static struct z3fold_header *__z3fold_alloc(struct z3fold_pool *, size_t, bool);
+static void compact_page_work(struct work_struct *w);
+
 /*****************
  * Helpers
 *****************/
@@ -191,8 +207,6 @@ static int size_to_chunks(size_t size)
 #define for_each_unbuddied_list(_iter, _begin) \
 	for ((_iter) = (_begin); (_iter) < NCHUNKS; (_iter)++)
 
-static void compact_page_work(struct work_struct *w);
-
 static inline struct z3fold_buddy_slots *alloc_slots(struct z3fold_pool *pool,
 							gfp_t gfp)
 {
@@ -204,6 +218,7 @@ static inline struct z3fold_buddy_slots
 	if (slots) {
 		memset(slots->slot, 0, sizeof(slots->slot));
 		slots->pool = (unsigned long)pool;
+		rwlock_init(&slots->lock);
 	}
 
 	return slots;
@@ -219,25 +234,110 @@ static inline struct z3fold_buddy_slots
 	return (struct z3fold_buddy_slots *)(handle & ~(SLOTS_ALIGN - 1));
 }
 
+/* Lock a z3fold page */
+static inline void z3fold_page_lock(struct z3fold_header *zhdr)
+{
+	spin_lock(&zhdr->page_lock);
+}
+
+/* Try to lock a z3fold page */
+static inline int z3fold_page_trylock(struct z3fold_header *zhdr)
+{
+	return spin_trylock(&zhdr->page_lock);
+}
+
+/* Unlock a z3fold page */
+static inline void z3fold_page_unlock(struct z3fold_header *zhdr)
+{
+	spin_unlock(&zhdr->page_lock);
+}
+
+
+static inline struct z3fold_header *__get_z3fold_header(unsigned long handle,
+							bool lock)
+{
+	struct z3fold_buddy_slots *slots;
+	struct z3fold_header *zhdr;
+	int locked = 0;
+
+	if (!(handle & (1 << PAGE_HEADLESS))) {
+		slots = handle_to_slots(handle);
+		do {
+			unsigned long addr;
+
+			read_lock(&slots->lock);
+			addr = *(unsigned long *)handle;
+			zhdr = (struct z3fold_header *)(addr & PAGE_MASK);
+			if (lock)
+				locked = z3fold_page_trylock(zhdr);
+			read_unlock(&slots->lock);
+			if (locked)
+				break;
+			cpu_relax();
+		} while (lock);
+	} else {
+		zhdr = (struct z3fold_header *)(handle & PAGE_MASK);
+	}
+
+	return zhdr;
+}
+
+/* Returns the z3fold page where a given handle is stored */
+static inline struct z3fold_header *handle_to_z3fold_header(unsigned long h)
+{
+	return __get_z3fold_header(h, false);
+}
+
+/* return locked z3fold page if it's not headless */
+static inline struct z3fold_header *get_z3fold_header(unsigned long h)
+{
+	return __get_z3fold_header(h, true);
+}
+
+static inline void put_z3fold_header(struct z3fold_header *zhdr)
+{
+	struct page *page = virt_to_page(zhdr);
+
+	if (!test_bit(PAGE_HEADLESS, &page->private))
+		z3fold_page_unlock(zhdr);
+}
+
 static inline void free_handle(unsigned long handle)
 {
 	struct z3fold_buddy_slots *slots;
+	struct z3fold_header *zhdr;
 	int i;
 	bool is_free;
 
 	if (handle & (1 << PAGE_HEADLESS))
 		return;
 
-	WARN_ON(*(unsigned long *)handle == 0);
-	*(unsigned long *)handle = 0;
+	if (WARN_ON(*(unsigned long *)handle == 0))
+		return;
+
+	zhdr = handle_to_z3fold_header(handle);
 	slots = handle_to_slots(handle);
+	write_lock(&slots->lock);
+	*(unsigned long *)handle = 0;
+	write_unlock(&slots->lock);
+	if (zhdr->slots == slots)
+		return; /* simple case, nothing else to do */
+
+	/* we are freeing a foreign handle if we are here */
+	zhdr->foreign_handles--;
 	is_free = true;
+	read_lock(&slots->lock);
+	if (!test_bit(HANDLES_ORPHANED, &slots->pool)) {
+		read_unlock(&slots->lock);
+		return;
+	}
 	for (i = 0; i <= BUDDY_MASK; i++) {
 		if (slots->slot[i]) {
 			is_free = false;
 			break;
 		}
 	}
+	read_unlock(&slots->lock);
 
 	if (is_free) {
 		struct z3fold_pool *pool = slots_to_pool(slots);
@@ -322,6 +422,7 @@ static struct z3fold_header *init_z3fold
 	zhdr->first_num = 0;
 	zhdr->start_middle = 0;
 	zhdr->cpu = -1;
+	zhdr->foreign_handles = 0;
 	zhdr->slots = slots;
 	zhdr->pool = pool;
 	INIT_LIST_HEAD(&zhdr->buddy);
@@ -341,24 +442,6 @@ static void free_z3fold_page(struct page
 	__free_page(page);
 }
 
-/* Lock a z3fold page */
-static inline void z3fold_page_lock(struct z3fold_header *zhdr)
-{
-	spin_lock(&zhdr->page_lock);
-}
-
-/* Try to lock a z3fold page */
-static inline int z3fold_page_trylock(struct z3fold_header *zhdr)
-{
-	return spin_trylock(&zhdr->page_lock);
-}
-
-/* Unlock a z3fold page */
-static inline void z3fold_page_unlock(struct z3fold_header *zhdr)
-{
-	spin_unlock(&zhdr->page_lock);
-}
-
 /* Helper function to build the index */
 static inline int __idx(struct z3fold_header *zhdr, enum buddy bud)
 {
@@ -389,7 +472,9 @@ static unsigned long __encode_handle(str
 	if (bud == LAST)
 		h |= (zhdr->last_chunks << BUDDY_SHIFT);
 
+	write_lock(&slots->lock);
 	slots->slot[idx] = h;
+	write_unlock(&slots->lock);
 	return (unsigned long)&slots->slot[idx];
 }
 
@@ -398,22 +483,15 @@ static unsigned long encode_handle(struc
 	return __encode_handle(zhdr, zhdr->slots, bud);
 }
 
-/* Returns the z3fold page where a given handle is stored */
-static inline struct z3fold_header *handle_to_z3fold_header(unsigned long h)
-{
-	unsigned long addr = h;
-
-	if (!(addr & (1 << PAGE_HEADLESS)))
-		addr = *(unsigned long *)h;
-
-	return (struct z3fold_header *)(addr & PAGE_MASK);
-}
-
 /* only for LAST bud, returns zero otherwise */
 static unsigned short handle_to_chunks(unsigned long handle)
 {
-	unsigned long addr = *(unsigned long *)handle;
+	struct z3fold_buddy_slots *slots = handle_to_slots(handle);
+	unsigned long addr;
 
+	read_lock(&slots->lock);
+	addr = *(unsigned long *)handle;
+	read_unlock(&slots->lock);
 	return (addr & ~PAGE_MASK) >> BUDDY_SHIFT;
 }
 
@@ -425,10 +503,13 @@ static unsigned short handle_to_chunks(u
 static enum buddy handle_to_buddy(unsigned long handle)
 {
 	struct z3fold_header *zhdr;
+	struct z3fold_buddy_slots *slots = handle_to_slots(handle);
 	unsigned long addr;
 
+	read_lock(&slots->lock);
 	WARN_ON(handle & (1 << PAGE_HEADLESS));
 	addr = *(unsigned long *)handle;
+	read_unlock(&slots->lock);
 	zhdr = (struct z3fold_header *)(addr & PAGE_MASK);
 	return (addr - zhdr->first_num) & BUDDY_MASK;
 }
@@ -442,6 +523,8 @@ static void __release_z3fold_page(struct
 {
 	struct page *page = virt_to_page(zhdr);
 	struct z3fold_pool *pool = zhdr_to_pool(zhdr);
+	bool is_free = true;
+	int i;
 
 	WARN_ON(!list_empty(&zhdr->buddy));
 	set_bit(PAGE_STALE, &page->private);
@@ -450,8 +533,25 @@ static void __release_z3fold_page(struct
 	if (!list_empty(&page->lru))
 		list_del_init(&page->lru);
 	spin_unlock(&pool->lock);
+
+	/* If there are no foreign handles, free the handles array */
+	read_lock(&zhdr->slots->lock);
+	for (i = 0; i <= BUDDY_MASK; i++) {
+		if (zhdr->slots->slot[i]) {
+			is_free = false;
+			break;
+		}
+	}
+	if (!is_free)
+		set_bit(HANDLES_ORPHANED, &zhdr->slots->pool);
+	read_unlock(&zhdr->slots->lock);
+
+	if (is_free)
+		kmem_cache_free(pool->c_handle, zhdr->slots);
+
 	if (locked)
 		z3fold_page_unlock(zhdr);
+
 	spin_lock(&pool->stale_lock);
 	list_add(&zhdr->buddy, &pool->stale);
 	queue_work(pool->release_wq, &pool->work);
@@ -479,6 +579,7 @@ static void release_z3fold_page_locked_l
 	struct z3fold_header *zhdr = container_of(ref, struct z3fold_header,
 					       refcount);
 	struct z3fold_pool *pool = zhdr_to_pool(zhdr);
+
 	spin_lock(&pool->lock);
 	list_del_init(&zhdr->buddy);
 	spin_unlock(&pool->lock);
@@ -559,6 +660,119 @@ static inline void *mchunk_memmove(struc
 			       zhdr->middle_chunks << CHUNK_SHIFT);
 }
 
+static inline bool buddy_single(struct z3fold_header *zhdr)
+{
+	return !((zhdr->first_chunks && zhdr->middle_chunks) ||
+			(zhdr->first_chunks && zhdr->last_chunks) ||
+			(zhdr->middle_chunks && zhdr->last_chunks));
+}
+
+static struct z3fold_header *compact_single_buddy(struct z3fold_header *zhdr)
+{
+	struct z3fold_pool *pool = zhdr_to_pool(zhdr);
+	void *p = zhdr;
+	unsigned long old_handle = 0;
+	size_t sz = 0;
+	struct z3fold_header *new_zhdr = NULL;
+	int first_idx = __idx(zhdr, FIRST);
+	int middle_idx = __idx(zhdr, MIDDLE);
+	int last_idx = __idx(zhdr, LAST);
+	unsigned short *moved_chunks = NULL;
+
+	/*
+	 * No need to protect slots here -- all the slots are "local" and
+	 * the page lock is already taken
+	 */
+	if (zhdr->first_chunks && zhdr->slots->slot[first_idx]) {
+		p += ZHDR_SIZE_ALIGNED;
+		sz = zhdr->first_chunks << CHUNK_SHIFT;
+		old_handle = (unsigned long)&zhdr->slots->slot[first_idx];
+		moved_chunks = &zhdr->first_chunks;
+	} else if (zhdr->middle_chunks && zhdr->slots->slot[middle_idx]) {
+		p += zhdr->start_middle << CHUNK_SHIFT;
+		sz = zhdr->middle_chunks << CHUNK_SHIFT;
+		old_handle = (unsigned long)&zhdr->slots->slot[middle_idx];
+		moved_chunks = &zhdr->middle_chunks;
+	} else if (zhdr->last_chunks && zhdr->slots->slot[last_idx]) {
+		p += PAGE_SIZE - (zhdr->last_chunks << CHUNK_SHIFT);
+		sz = zhdr->last_chunks << CHUNK_SHIFT;
+		old_handle = (unsigned long)&zhdr->slots->slot[last_idx];
+		moved_chunks = &zhdr->last_chunks;
+	}
+
+	if (sz > 0) {
+		enum buddy new_bud = HEADLESS;
+		short chunks = size_to_chunks(sz);
+		void *q;
+
+		new_zhdr = __z3fold_alloc(pool, sz, false);
+		if (!new_zhdr)
+			return NULL;
+
+		if (WARN_ON(new_zhdr == zhdr))
+			goto out_fail;
+
+		if (new_zhdr->first_chunks == 0) {
+			if (new_zhdr->middle_chunks != 0 &&
+					chunks >= new_zhdr->start_middle) {
+				new_bud = LAST;
+			} else {
+				new_bud = FIRST;
+			}
+		} else if (new_zhdr->last_chunks == 0) {
+			new_bud = LAST;
+		} else if (new_zhdr->middle_chunks == 0) {
+			new_bud = MIDDLE;
+		}
+		q = new_zhdr;
+		switch (new_bud) {
+		case FIRST:
+			new_zhdr->first_chunks = chunks;
+			q += ZHDR_SIZE_ALIGNED;
+			break;
+		case MIDDLE:
+			new_zhdr->middle_chunks = chunks;
+			new_zhdr->start_middle =
+				new_zhdr->first_chunks + ZHDR_CHUNKS;
+			q += new_zhdr->start_middle << CHUNK_SHIFT;
+			break;
+		case LAST:
+			new_zhdr->last_chunks = chunks;
+			q += PAGE_SIZE - (new_zhdr->last_chunks << CHUNK_SHIFT);
+			break;
+		default:
+			goto out_fail;
+		}
+		new_zhdr->foreign_handles++;
+		memcpy(q, p, sz);
+		write_lock(&zhdr->slots->lock);
+		*(unsigned long *)old_handle = (unsigned long)new_zhdr +
+			__idx(new_zhdr, new_bud);
+		if (new_bud == LAST)
+			*(unsigned long *)old_handle |=
+				(new_zhdr->last_chunks << BUDDY_SHIFT);
+		write_unlock(&zhdr->slots->lock);
+		add_to_unbuddied(pool, new_zhdr);
+		z3fold_page_unlock(new_zhdr);
+
+		*moved_chunks = 0;
+	}
+
+	return new_zhdr;
+
+out_fail:
+	if (new_zhdr) {
+		if (kref_put(&new_zhdr->refcount, release_z3fold_page_locked))
+			atomic64_dec(&pool->pages_nr);
+		else {
+			add_to_unbuddied(pool, new_zhdr);
+			z3fold_page_unlock(new_zhdr);
+		}
+	}
+	return NULL;
+
+}
+
 #define BIG_CHUNK_GAP	3
 /* Has to be called with lock held */
 static int z3fold_compact_page(struct z3fold_header *zhdr)
@@ -638,6 +852,15 @@ static void do_compact_page(struct z3fol
 		return;
 	}
 
+	if (!zhdr->foreign_handles && buddy_single(zhdr) &&
+	    zhdr->mapped_count == 0 && compact_single_buddy(zhdr)) {
+		if (kref_put(&zhdr->refcount, release_z3fold_page_locked))
+			atomic64_dec(&pool->pages_nr);
+		else
+			z3fold_page_unlock(zhdr);
+		return;
+	}
+
 	z3fold_compact_page(zhdr);
 	add_to_unbuddied(pool, zhdr);
 	z3fold_page_unlock(zhdr);
@@ -690,7 +913,8 @@ lookup:
 		spin_unlock(&pool->lock);
 
 		page = virt_to_page(zhdr);
-		if (test_bit(NEEDS_COMPACTING, &page->private)) {
+		if (test_bit(NEEDS_COMPACTING, &page->private) ||
+		    test_bit(PAGE_CLAIMED, &page->private)) {
			z3fold_page_unlock(zhdr);
			zhdr = NULL;
			put_cpu_ptr(pool->unbuddied);
@@ -734,7 +958,8 @@ lookup:
 		spin_unlock(&pool->lock);
 
 		page = virt_to_page(zhdr);
-		if (test_bit(NEEDS_COMPACTING, &page->private)) {
+		if (test_bit(NEEDS_COMPACTING, &page->private) ||
+		    test_bit(PAGE_CLAIMED, &page->private)) {
 			z3fold_page_unlock(zhdr);
 			zhdr = NULL;
 			if (can_sleep)
@@ -1000,7 +1225,7 @@ static void z3fold_free(struct z3fold_po
 	enum buddy bud;
 	bool page_claimed;
 
-	zhdr = handle_to_z3fold_header(handle);
+	zhdr = get_z3fold_header(handle);
 	page = virt_to_page(zhdr);
 	page_claimed = test_and_set_bit(PAGE_CLAIMED, &page->private);
 
@@ -1014,6 +1239,7 @@ static void z3fold_free(struct z3fold_po
 			spin_lock(&pool->lock);
 			list_del(&page->lru);
 			spin_unlock(&pool->lock);
+			put_z3fold_header(zhdr);
 			free_z3fold_page(page, true);
 			atomic64_dec(&pool->pages_nr);
 		}
@@ -1021,7 +1247,6 @@ static void z3fold_free(struct z3fold_po
 	}
 
 	/* Non-headless case */
-	z3fold_page_lock(zhdr);
 	bud = handle_to_buddy(handle);
 
 	switch (bud) {
@@ -1037,11 +1262,13 @@ static void z3fold_free(struct z3fold_po
 	default:
 		pr_err("%s: unknown bud %d\n", __func__, bud);
 		WARN_ON(1);
-		z3fold_page_unlock(zhdr);
+		put_z3fold_header(zhdr);
+		clear_bit(PAGE_CLAIMED, &page->private);
 		return;
 	}
 
-	free_handle(handle);
+	if (!page_claimed)
+		free_handle(handle);
 	if (kref_put(&zhdr->refcount, release_z3fold_page_locked_list)) {
 		atomic64_dec(&pool->pages_nr);
 		return;
@@ -1053,7 +1280,7 @@ static void z3fold_free(struct z3fold_po
 	}
 	if (unlikely(PageIsolated(page)) ||
 	    test_and_set_bit(NEEDS_COMPACTING, &page->private)) {
-		z3fold_page_unlock(zhdr);
+		put_z3fold_header(zhdr);
 		clear_bit(PAGE_CLAIMED, &page->private);
 		return;
 	}
@@ -1063,14 +1290,14 @@ static void z3fold_free(struct z3fold_po
 		spin_unlock(&pool->lock);
 		zhdr->cpu = -1;
 		kref_get(&zhdr->refcount);
-		do_compact_page(zhdr, true);
 		clear_bit(PAGE_CLAIMED, &page->private);
+		do_compact_page(zhdr, true);
 		return;
 	}
 	kref_get(&zhdr->refcount);
-	queue_work_on(zhdr->cpu, pool->compact_wq, &zhdr->work);
 	clear_bit(PAGE_CLAIMED, &page->private);
-	z3fold_page_unlock(zhdr);
+	queue_work_on(zhdr->cpu, pool->compact_wq, &zhdr->work);
+	put_z3fold_header(zhdr);
 }
 
 /**
@@ -1111,11 +1338,10 @@ static void z3fold_free(struct z3fold_po
  */
 static int z3fold_reclaim_page(struct z3fold_pool *pool, unsigned int retries)
 {
-	int i, ret = 0;
+	int i, ret = -1;
 	struct z3fold_header *zhdr = NULL;
 	struct page *page = NULL;
 	struct list_head *pos;
-	struct z3fold_buddy_slots slots;
 	unsigned long first_handle = 0, middle_handle = 0, last_handle = 0;
 
 	spin_lock(&pool->lock);
@@ -1153,6 +1379,12 @@ static int z3fold_reclaim_page(struct z3
 				zhdr = NULL;
 				continue; /* can't evict at this point */
 			}
+			if (zhdr->foreign_handles) {
+				clear_bit(PAGE_CLAIMED, &page->private);
+				z3fold_page_unlock(zhdr);
+				zhdr = NULL;
+				continue; /* can't evict such page */
+			}
 			kref_get(&zhdr->refcount);
 			list_del_init(&zhdr->buddy);
 			zhdr->cpu = -1;
@@ -1176,39 +1408,38 @@ static int z3fold_reclaim_page(struct z3
 			last_handle = 0;
 			middle_handle = 0;
 			if (zhdr->first_chunks)
-				first_handle = __encode_handle(zhdr, &slots,
-								FIRST);
+				first_handle = encode_handle(zhdr, FIRST);
 			if (zhdr->middle_chunks)
-				middle_handle = __encode_handle(zhdr, &slots,
-								MIDDLE);
+				middle_handle = encode_handle(zhdr, MIDDLE);
 			if (zhdr->last_chunks)
-				last_handle = __encode_handle(zhdr, &slots,
-								LAST);
+				last_handle = encode_handle(zhdr, LAST);
 			/*
 			 * it's safe to unlock here because we hold a
 			 * reference to this page
 			 */
 			z3fold_page_unlock(zhdr);
 		} else {
-			first_handle = __encode_handle(zhdr, &slots, HEADLESS);
+			first_handle = encode_handle(zhdr, HEADLESS);
 			last_handle = middle_handle = 0;
 		}
-
 		/* Issue the eviction callback(s) */
 		if (middle_handle) {
 			ret = pool->ops->evict(pool, middle_handle);
 			if (ret)
 				goto next;
+			free_handle(middle_handle);
 		}
 		if (first_handle) {
 			ret = pool->ops->evict(pool, first_handle);
 			if (ret)
 				goto next;
+			free_handle(first_handle);
 		}
 		if (last_handle) {
 			ret = pool->ops->evict(pool, last_handle);
 			if (ret)
 				goto next;
+			free_handle(last_handle);
 		}
next:
 		if (test_bit(PAGE_HEADLESS, &page->private)) {
@@ -1264,14 +1495,13 @@ static void *z3fold_map(struct z3fold_po
 	void *addr;
 	enum buddy buddy;
 
-	zhdr = handle_to_z3fold_header(handle);
+	zhdr = get_z3fold_header(handle);
 	addr = zhdr;
 	page = virt_to_page(zhdr);
 
 	if (test_bit(PAGE_HEADLESS, &page->private))
 		goto out;
 
-	z3fold_page_lock(zhdr);
 	buddy = handle_to_buddy(handle);
 	switch (buddy) {
 	case FIRST:
@@ -1293,8 +1523,8 @@ static void *z3fold_map(struct z3fold_po
 
 	if (addr)
 		zhdr->mapped_count++;
-	z3fold_page_unlock(zhdr);
 out:
+	put_z3fold_header(zhdr);
 	return addr;
 }
 
@@ -1309,18 +1539,17 @@ static void z3fold_unmap(struct z3fold_p
 	struct page *page;
 	enum buddy buddy;
 
-	zhdr = handle_to_z3fold_header(handle);
+	zhdr = get_z3fold_header(handle);
 	page = virt_to_page(zhdr);
 
 	if (test_bit(PAGE_HEADLESS, &page->private))
 		return;
 
-	z3fold_page_lock(zhdr);
 	buddy = handle_to_buddy(handle);
 	if (buddy == MIDDLE)
 		clear_bit(MIDDLE_CHUNK_MAPPED, &page->private);
 	zhdr->mapped_count--;
-	z3fold_page_unlock(zhdr);
+	put_z3fold_header(zhdr);
 }
 
 /**
@@ -1352,19 +1581,21 @@ static bool z3fold_page_isolate(struct p
 	    test_bit(PAGE_STALE, &page->private))
 		goto out;
 
+	if (zhdr->mapped_count != 0 || zhdr->foreign_handles != 0)
+		goto out;
+
 	pool = zhdr_to_pool(zhdr);
+	spin_lock(&pool->lock);
+	if (!list_empty(&zhdr->buddy))
+		list_del_init(&zhdr->buddy);
+	if (!list_empty(&page->lru))
+		list_del_init(&page->lru);
+	spin_unlock(&pool->lock);
+
+	kref_get(&zhdr->refcount);
+	z3fold_page_unlock(zhdr);
+	return true;
 
-	if (zhdr->mapped_count == 0) {
-		kref_get(&zhdr->refcount);
-		if (!list_empty(&zhdr->buddy))
-			list_del_init(&zhdr->buddy);
-		spin_lock(&pool->lock);
-		if (!list_empty(&page->lru))
-			list_del(&page->lru);
-		spin_unlock(&pool->lock);
-		z3fold_page_unlock(zhdr);
-		return true;
-	}
 out:
 	z3fold_page_unlock(zhdr);
 	return false;
@@ -1387,7 +1618,7 @@ static int z3fold_page_migrate(struct ad
 	if (!z3fold_page_trylock(zhdr)) {
 		return -EAGAIN;
 	}
-	if (zhdr->mapped_count != 0) {
+	if (zhdr->mapped_count != 0 || zhdr->foreign_handles != 0) {
 		z3fold_page_unlock(zhdr);
 		return -EBUSY;
 	}
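
For completeness, here is a small userspace model of the new foreign-handle
accounting in free_handle(): clear the slot under the write lock, decrement
the hosting page's foreign_handles, and free the orphaned slots array once
the page that owned it has been released (HANDLES_ORPHANED) and the last
slot is cleared.  A pthread rwlock stands in for the kernel's rwlock_t and
the structures are simplified stand-ins; this is an illustration, not the
kernel implementation.

/*
 * Illustrative userspace model of foreign-handle accounting on free.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define NSLOTS 3

struct slots {
	unsigned long slot[NSLOTS];
	bool handles_orphaned;		/* owner page already released */
	pthread_rwlock_t lock;
};

struct zpage {
	struct slots *slots;		/* the page's own handle array */
	int foreign_handles;
};

/*
 * Clear one handle; if it was foreign, drop the hosting page's count
 * and free the orphaned slots array once its last handle is gone.
 */
static void free_handle(struct zpage *host, struct slots *s, int idx)
{
	bool is_free = true;

	pthread_rwlock_wrlock(&s->lock);
	s->slot[idx] = 0;
	pthread_rwlock_unlock(&s->lock);

	if (host->slots == s)
		return;			/* local handle, nothing else to do */

	host->foreign_handles--;	/* host may become movable again */

	pthread_rwlock_rdlock(&s->lock);
	if (s->handles_orphaned) {
		for (int i = 0; i < NSLOTS; i++)
			if (s->slot[i]) {
				is_free = false;
				break;
			}
	} else {
		is_free = false;	/* owner page will free the array */
	}
	pthread_rwlock_unlock(&s->lock);

	if (is_free)
		printf("orphaned slots array is empty; freeing it\n");
}

int main(void)
{
	struct slots orphan = { .slot = { 0, 42, 0 },
				.handles_orphaned = true,
				.lock = PTHREAD_RWLOCK_INITIALIZER };
	struct slots own = { .lock = PTHREAD_RWLOCK_INITIALIZER };
	struct zpage host = { .slots = &own, .foreign_handles = 1 };

	free_handle(&host, &orphan, 1);	/* last foreign handle goes away */
	printf("host foreign_handles = %d\n", host.foreign_handles);
	return 0;
}

This mirrors why the kernel patch defers freeing the slots array in
__release_z3fold_page(): while foreign handles still point into it, the
array must outlive its owning page, so it is only marked HANDLES_ORPHANED
there and reclaimed by whichever free_handle() call clears the last slot.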