Message ID: 20230126215125.4069751-12-kbusch@meta.com (mailing list archive)
State: New
Series: dmapool enhancements
On Thu, Jan 26, 2023 at 9:55 PM Keith Busch <kbusch@meta.com> wrote:
>
> From: Keith Busch <kbusch@kernel.org>
>
> The allocated dmapool pages are never freed for the lifetime of the
> pool. There is no need for the two-level list+stack lookup for finding a
> free block since nothing is ever removed from the list. Just use a
> simple stack, reducing time complexity to constant.
>
> The implementation inserts the stack linking elements and the dma handle
> of the block within itself when freed. This means the smallest possible
> dmapool block is increased to at most 16 bytes to accommodate these
> fields, but there are no existing users requesting a dma pool smaller
> than that anyway.
>
> Removing the list makes a significant difference in performance. Using
> the kernel's micro-benchmarking self test:
>
> Before:
>
>   # modprobe dmapool_test
>   dmapool test: size:16   blocks:8192 time:57282
>   dmapool test: size:64   blocks:8192 time:172562
>   dmapool test: size:256  blocks:8192 time:789247
>   dmapool test: size:1024 blocks:2048 time:371823
>   dmapool test: size:4096 blocks:1024 time:362237
>
> After:
>
>   # modprobe dmapool_test
>   dmapool test: size:16   blocks:8192 time:24997
>   dmapool test: size:64   blocks:8192 time:26584
>   dmapool test: size:256  blocks:8192 time:33542
>   dmapool test: size:1024 blocks:2048 time:9022
>   dmapool test: size:4096 blocks:1024 time:6045
>
> The module test allocates quite a few blocks that may not accurately
> represent how these pools are used in real life. For a more macro-level
> benchmark, running high-depth, high-batch fio on nvme, this patch shows
> submission and completion latency reduced by ~100 usec each, a 1% IOPS
> improvement, and perf record shows time spent in dma_pool_alloc/free
> reduced by half.
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Keith Busch <kbusch@kernel.org>

So.

Somehow this commit has broken USB device mode for me with the
Chipidea IP on msm8916 and msm8939.
Bisecting down I find this is the inflection point

commit ced6d06a81fb69e2f625b0c4b272b687a3789faa (HEAD -> usb-test-delete)
Author: Keith Busch <kbusch@kernel.org>
Date:   Thu Jan 26 13:51:24 2023 -0800

Host side sees

[128418.779220] usb 5-1.3: New USB device found, idVendor=18d1, idProduct=d00d, bcdDevice= 1.00
[128418.779225] usb 5-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[128418.779227] usb 5-1.3: Product: Android
[128418.779228] usb 5-1.3: Manufacturer: Google
[128418.779229] usb 5-1.3: SerialNumber: 1628e0d7
[128432.387235] usb 5-1.3: USB disconnect, device number 88
[128510.296291] usb 5-1.3: new full-speed USB device number 89 using xhci_hcd
[128525.812946] usb 5-1.3: device descriptor read/64, error -110
[128541.382920] usb 5-1.3: device descriptor read/64, error -110

The commit immediately prior is fine

commit c1e5fc194960aa3d3daa4f102a29e962f25a64d1
Author: Keith Busch <kbusch@kernel.org>
Date:   Thu Jan 26 13:51:23 2023 -0800

    dmapool: don't memset on free twice

[128750.414739] usb 5-1.3: New USB device found, idVendor=18d1, idProduct=d00d, bcdDevice= 1.00
[128750.414745] usb 5-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[128750.414746] usb 5-1.3: Product: Android
[128750.414747] usb 5-1.3: Manufacturer: Google
[128750.414748] usb 5-1.3: SerialNumber: 1628e0d7
[128764.035758] usb 5-1.3: USB disconnect, device number 91
[128788.305767] usb 5-1.3: new full-speed USB device number 92 using xhci_hcd
[128788.406795] usb 5-1.3: not running at top speed; connect to a high speed hub
[128788.427793] usb 5-1.3: New USB device found, idVendor=0525, idProduct=a4a2, bcdDevice= 6.02
[128788.427798] usb 5-1.3: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[128788.427799] usb 5-1.3: Product: RNDIS/Ethernet Gadget
[128788.427801] usb 5-1.3: Manufacturer: Linux 6.2.0-rc4-00517-gc1e5fc194960-dirty with ci_hdrc_msm
[128788.490939] cdc_ether 5-1.3:1.0 usb0: register 'cdc_ether' at usb-0000:31:00.3-1.3, CDC Ethernet Device, 36:0e:12:58:48:ec

---
bod
On Wed, Feb 01, 2023 at 05:42:04PM +0000, Bryan O'Donoghue wrote:
> On Thu, Jan 26, 2023 at 9:55 PM Keith Busch <kbusch@meta.com> wrote:
> So.
>
> Somehow this commit has broken USB device mode for me with the
> Chipidea IP on msm8916 and msm8939.
>
> Bisecting down I find this is the inflection point
>
> commit ced6d06a81fb69e2f625b0c4b272b687a3789faa (HEAD -> usb-test-delete)

Thanks for the report. I'll look into this immediately.
On 01/02/2023 17:43, Keith Busch wrote:
> On Wed, Feb 01, 2023 at 05:42:04PM +0000, Bryan O'Donoghue wrote:
>> On Thu, Jan 26, 2023 at 9:55 PM Keith Busch <kbusch@meta.com> wrote:
>> So.
>>
>> Somehow this commit has broken USB device mode for me with the
>> Chipidea IP on msm8916 and msm8939.
>>
>> Bisecting down I find this is the inflection point
>>
>> commit ced6d06a81fb69e2f625b0c4b272b687a3789faa (HEAD -> usb-test-delete)
>
> Thanks for the report. I'll look into this immediately.

Just to confirm: if I revert that patch on the tip of my working tree,
USB device mode works again.

Here's a dirty working tree:
https://git.codelinaro.org/bryan.odonoghue/kernel/-/commits/linux-next-23-02-01-msm8939-nocpr

---
bod
Hi,

On Thu, Jan 26, 2023 at 01:51:24PM -0800, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
>
> The allocated dmapool pages are never freed for the lifetime of the
> pool. There is no need for the two-level list+stack lookup for finding a
> free block since nothing is ever removed from the list. Just use a
> simple stack, reducing time complexity to constant.
>
> The implementation inserts the stack linking elements and the dma handle
> of the block within itself when freed. This means the smallest possible
> dmapool block is increased to at most 16 bytes to accommodate these
> fields, but there are no existing users requesting a dma pool smaller
> than that anyway.
>
> Removing the list makes a significant difference in performance. Using
> the kernel's micro-benchmarking self test:
>
> Before:
>
>   # modprobe dmapool_test
>   dmapool test: size:16   blocks:8192 time:57282
>   dmapool test: size:64   blocks:8192 time:172562
>   dmapool test: size:256  blocks:8192 time:789247
>   dmapool test: size:1024 blocks:2048 time:371823
>   dmapool test: size:4096 blocks:1024 time:362237
>
> After:
>
>   # modprobe dmapool_test
>   dmapool test: size:16   blocks:8192 time:24997
>   dmapool test: size:64   blocks:8192 time:26584
>   dmapool test: size:256  blocks:8192 time:33542
>   dmapool test: size:1024 blocks:2048 time:9022
>   dmapool test: size:4096 blocks:1024 time:6045
>
> The module test allocates quite a few blocks that may not accurately
> represent how these pools are used in real life. For a more macro-level
> benchmark, running high-depth, high-batch fio on nvme, this patch shows
> submission and completion latency reduced by ~100 usec each, a 1% IOPS
> improvement, and perf record shows time spent in dma_pool_alloc/free
> reduced by half.
>

With this patch in linux-next, I see a boot failure when trying to boot
a powernv qemu emulation from the SCSI MEGASAS controller.
Qemu command line is

qemu-system-ppc64 -M powernv -cpu POWER9 -m 2G \
    -kernel arch/powerpc/boot/zImage.epapr \
    -snapshot \
    -device megasas,id=scsi,bus=pcie.0 -device scsi-hd,bus=scsi.0,drive=d0 \
    -drive file=rootfs-el.ext2,format=raw,if=none,id=d0 \
    -device i82557a,netdev=net0,bus=pcie.1 -netdev user,id=net0 \
    -nographic -vga none -monitor null -no-reboot \
    --append "root=/dev/sda console=tty console=hvc0"

Reverting this patch together with "dmapool: create/destroy cleanup"
fixes the problem. Bisect log is attached for reference.

Guenter

---
# bad: [8232539f864ca60474e38eb42d451f5c26415856] Add linux-next specific files for 20230225
# good: [c9c3395d5e3dcc6daee66c6908354d47bf98cb0c] Linux 6.2
git bisect start 'HEAD' 'v6.2'
# good: [fe3130bc4df0b1303de4321af2bc4dcee5d7db2f] cifs: reuse cifs_match_ipaddr for comparison of dstaddr too
git bisect good fe3130bc4df0b1303de4321af2bc4dcee5d7db2f
# good: [8138ddac3c324feb92cc30f6d0d3a1bba51345a9] Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input.git
git bisect good 8138ddac3c324feb92cc30f6d0d3a1bba51345a9
# bad: [2a15ddbcd09ca3a7843a48832884e37e703eaf83] Merge branch 'master' of git://linuxtv.org/media_tree.git
git bisect bad 2a15ddbcd09ca3a7843a48832884e37e703eaf83
# bad: [a7d241d71cf464413307df69177ae2dec8481d37] Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git
git bisect bad a7d241d71cf464413307df69177ae2dec8481d37
# bad: [446eb7f1f4aec9232d4b10222123a4566a8b1a95] Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap.git
git bisect bad 446eb7f1f4aec9232d4b10222123a4566a8b1a95
# good: [14c61d2100377dde2f6338395325b4090279d6a7] soc: document merges
git bisect good 14c61d2100377dde2f6338395325b4090279d6a7
# bad: [cb26c07e8a8acaecb43228181e1eae68ece8db0e] Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild.git
git bisect bad cb26c07e8a8acaecb43228181e1eae68ece8db0e
# bad: [d37d53a39d853fcc2121770fd3b61f274985d594] Merge branch 'mm-everything' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
git bisect bad d37d53a39d853fcc2121770fd3b61f274985d594
# bad: [708a06c601945c3415240ed0950e37fe27dd8e60] mm/userfaultfd: support WP on multiple VMAs
git bisect bad 708a06c601945c3415240ed0950e37fe27dd8e60
# good: [beb78ba6c0dbed73b38d5ed74bf47aa2c65fafa7] dmapool: move debug code to own functions
git bisect good beb78ba6c0dbed73b38d5ed74bf47aa2c65fafa7
# bad: [a2cb3f101b06f78258cf0c6813b3a17bd1ec846a] zsmalloc: remove insert_zspage() ->inuse optimization
git bisect bad a2cb3f101b06f78258cf0c6813b3a17bd1ec846a
# good: [e637ac603aec2b0a73e50fd8031481c6e55bf139] dmapool: don't memset on free twice
git bisect good e637ac603aec2b0a73e50fd8031481c6e55bf139
# bad: [8f5073712e32685dfeb4925f13a95c6eb9f10cd8] dmapool: create/destroy cleanup
git bisect bad 8f5073712e32685dfeb4925f13a95c6eb9f10cd8
# bad: [28b0a0c64bc658e176368f9270dc8085aa469c63] dmapool: link blocks across pages
git bisect bad 28b0a0c64bc658e176368f9270dc8085aa469c63
# first bad commit: [28b0a0c64bc658e176368f9270dc8085aa469c63] dmapool: link blocks across pages
On Sun, Feb 26, 2023 at 04:54:45PM -0800, Guenter Roeck wrote:
> With this patch in linux-next, I see a boot failure when trying to boot
> a powernv qemu emulation from the SCSI MEGASAS controller.
>
> Qemu command line is
>
> qemu-system-ppc64 -M powernv -cpu POWER9 -m 2G \
>     -kernel arch/powerpc/boot/zImage.epapr \
>     -snapshot \
>     -device megasas,id=scsi,bus=pcie.0 -device scsi-hd,bus=scsi.0,drive=d0 \
>     -drive file=rootfs-el.ext2,format=raw,if=none,id=d0 \
>     -device i82557a,netdev=net0,bus=pcie.1 -netdev user,id=net0 \
>     -nographic -vga none -monitor null -no-reboot \
>     --append "root=/dev/sda console=tty console=hvc0"
>
> Reverting this patch together with "dmapool: create/destroy cleanup"
> fixes the problem.

Thanks for the notice. I was able to recreate it, and it does look like
this is fixed by my more recent update changing the dma pool block order,
which is still pending out of tree. Would you also be able to verify? The
patch is available here:

https://lore.kernel.org/linux-mm/Y%2FzmUXrAiNujjoib@kbusch-mbp.dhcp.thefacebook.com/T/#t
On Mon, Feb 27, 2023 at 06:01:48PM -0700, Keith Busch wrote:
> On Sun, Feb 26, 2023 at 04:54:45PM -0800, Guenter Roeck wrote:
> > With this patch in linux-next, I see a boot failure when trying to boot
> > a powernv qemu emulation from the SCSI MEGASAS controller.
> >
> > Reverting this patch together with "dmapool: create/destroy cleanup"
> > fixes the problem.
>
> Thanks for the notice. I was able to recreate, and it does look like this is
> fixed with my more recent update changing the dma pool block order, and that is
> still pending out of tree. Would you also be able to verify? The patch is
> available here:
>
> https://lore.kernel.org/linux-mm/Y%2FzmUXrAiNujjoib@kbusch-mbp.dhcp.thefacebook.com/T/#t

Yes, that fixes the problem I have observed. I sent a Tested-by: a minute ago.

Guenter
diff --git a/mm/dmapool.c b/mm/dmapool.c
index 21e6d362c7264..bb8893b4f4b96 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -15,7 +15,7 @@
  * represented by the 'struct dma_pool' which keeps a doubly-linked list of
  * allocated pages. Each page in the page_list is split into blocks of at
  * least 'size' bytes. Free blocks are tracked in an unsorted singly-linked
- * list of free blocks within the page. Used blocks aren't tracked, but we
+ * list of free blocks across all pages. Used blocks aren't tracked, but we
  * keep a count of how many are currently allocated from each page.
  */
@@ -40,9 +40,18 @@
 #define DMAPOOL_DEBUG 1
 #endif
 
+struct dma_block {
+	struct dma_block *next_block;
+	dma_addr_t dma;
+};
+
 struct dma_pool {		/* the pool */
 	struct list_head page_list;
 	spinlock_t lock;
+	struct dma_block *next_block;
+	size_t nr_blocks;
+	size_t nr_active;
+	size_t nr_pages;
 	struct device *dev;
 	unsigned int size;
 	unsigned int allocation;
@@ -55,8 +64,6 @@ struct dma_page {	/* cacheable header for 'allocation' bytes */
 	struct list_head page_list;
 	void *vaddr;
 	dma_addr_t dma;
-	unsigned int in_use;
-	unsigned int offset;
 };
 
 static DEFINE_MUTEX(pools_lock);
@@ -64,30 +71,18 @@ static DEFINE_MUTEX(pools_reg_lock);
 
 static ssize_t pools_show(struct device *dev, struct device_attribute *attr, char *buf)
 {
-	int size;
-	struct dma_page *page;
 	struct dma_pool *pool;
+	unsigned size;
 
 	size = sysfs_emit(buf, "poolinfo - 0.1\n");
 
 	mutex_lock(&pools_lock);
 	list_for_each_entry(pool, &dev->dma_pools, pools) {
-		unsigned pages = 0;
-		size_t blocks = 0;
-
-		spin_lock_irq(&pool->lock);
-		list_for_each_entry(page, &pool->page_list, page_list) {
-			pages++;
-			blocks += page->in_use;
-		}
-		spin_unlock_irq(&pool->lock);
-
 		/* per-pool info, no real statistics yet */
-		size += sysfs_emit_at(buf, size, "%-16s %4zu %4zu %4u %2u\n",
-				      pool->name, blocks,
-				      (size_t) pages *
-				      (pool->allocation / pool->size),
-				      pool->size, pages);
+		size += sysfs_emit_at(buf, size, "%-16s %4zu %4zu %4u %2zu\n",
+				      pool->name, pool->nr_active,
+				      pool->nr_blocks, pool->size,
+				      pool->nr_pages);
 	}
 	mutex_unlock(&pools_lock);
@@ -97,17 +92,17 @@ static ssize_t pools_show(struct device *dev, struct device_attribute *attr, cha
 static DEVICE_ATTR_RO(pools);
 
 #ifdef DMAPOOL_DEBUG
-static void pool_check_block(struct dma_pool *pool, void *retval,
-			     unsigned int offset, gfp_t mem_flags)
+static void pool_check_block(struct dma_pool *pool, struct dma_block *block,
+			     gfp_t mem_flags)
 {
+	u8 *data = (void *)block;
 	int i;
-	u8 *data = retval;
-	/* page->offset is stored in first 4 bytes */
-	for (i = sizeof(offset); i < pool->size; i++) {
+
+	for (i = sizeof(struct dma_block); i < pool->size; i++) {
 		if (data[i] == POOL_POISON_FREED)
 			continue;
-		dev_err(pool->dev, "%s %s, %p (corrupted)\n",
-			__func__, pool->name, retval);
+		dev_err(pool->dev, "%s %s, %p (corrupted)\n", __func__,
+			pool->name, block);
 
 		/*
 		 * Dump the first 4 bytes even if they are not
@@ -117,31 +112,46 @@ static void pool_check_block(struct dma_pool *pool, void *retval,
 			data, pool->size, 1);
 		break;
 	}
+
 	if (!want_init_on_alloc(mem_flags))
-		memset(retval, POOL_POISON_ALLOCATED, pool->size);
+		memset(block, POOL_POISON_ALLOCATED, pool->size);
+}
+
+static struct dma_page *pool_find_page(struct dma_pool *pool, dma_addr_t dma)
+{
+	struct dma_page *page;
+
+	list_for_each_entry(page, &pool->page_list, page_list) {
+		if (dma < page->dma)
+			continue;
+		if ((dma - page->dma) < pool->allocation)
+			return page;
+	}
+	return NULL;
 }
 
-static bool pool_page_err(struct dma_pool *pool, struct dma_page *page,
-			  void *vaddr, dma_addr_t dma)
+static bool pool_block_err(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
 {
-	unsigned int offset = vaddr - page->vaddr;
-	unsigned int chain = page->offset;
+	struct dma_block *block = pool->next_block;
+	struct dma_page *page;
 
-	if ((dma - page->dma) != offset) {
-		dev_err(pool->dev, "%s %s, %p (bad vaddr)/%pad\n",
+	page = pool_find_page(pool, dma);
+	if (!page) {
+		dev_err(pool->dev, "%s %s, %p/%pad (bad dma)\n",
 			__func__, pool->name, vaddr, &dma);
 		return true;
 	}
 
-	while (chain < pool->allocation) {
-		if (chain != offset) {
-			chain = *(int *)(page->vaddr + chain);
+	while (block) {
+		if (block != vaddr) {
+			block = block->next_block;
 			continue;
 		}
 		dev_err(pool->dev, "%s %s, dma %pad already free\n",
			__func__, pool->name, &dma);
 		return true;
 	}
+
 	memset(vaddr, POOL_POISON_FREED, pool->size);
 	return false;
 }
@@ -151,14 +161,12 @@ static void pool_init_page(struct dma_pool *pool, struct dma_page *page)
 	memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
 }
 #else
-static void pool_check_block(struct dma_pool *pool, void *retval,
-			     unsigned int offset, gfp_t mem_flags)
-
+static void pool_check_block(struct dma_pool *pool, struct dma_block *block,
+			     gfp_t mem_flags)
 {
 }
 
-static bool pool_page_err(struct dma_pool *pool, struct dma_page *page,
-			  void *vaddr, dma_addr_t dma)
+static bool pool_block_err(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
 {
 	if (want_init_on_free())
 		memset(vaddr, 0, pool->size);
@@ -170,6 +178,26 @@ static void pool_init_page(struct dma_pool *pool, struct dma_page *page)
 }
 #endif
 
+static struct dma_block *pool_block_pop(struct dma_pool *pool)
+{
+	struct dma_block *block = pool->next_block;
+
+	if (block) {
+		pool->next_block = block->next_block;
+		pool->nr_active++;
+	}
+	return block;
+}
+
+static void pool_block_push(struct dma_pool *pool, struct dma_block *block,
+			    dma_addr_t dma)
+{
+	block->dma = dma;
+	block->next_block = pool->next_block;
+	pool->next_block = block;
+}
+
+
 /**
  * dma_pool_create - Creates a pool of consistent memory blocks, for dma.
  * @name: name of pool, for diagnostics
@@ -210,8 +238,8 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
 
 	if (size == 0 || size > INT_MAX)
 		return NULL;
-	else if (size < 4)
-		size = 4;
+	if (size < sizeof(struct dma_block))
+		size = sizeof(struct dma_block);
 
 	size = ALIGN(size, align);
 	allocation = max_t(size_t, size, PAGE_SIZE);
@@ -223,7 +251,7 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
 
 	boundary = min(boundary, allocation);
 
-	retval = kmalloc(sizeof(*retval), GFP_KERNEL);
+	retval = kzalloc(sizeof(*retval), GFP_KERNEL);
 	if (!retval)
 		return retval;
 
@@ -236,7 +264,6 @@ struct dma_pool *dma_pool_create(const char *name, struct device *dev,
 	retval->size = size;
 	retval->boundary = boundary;
 	retval->allocation = allocation;
-	INIT_LIST_HEAD(&retval->pools);
 
 	/*
@@ -273,21 +300,25 @@ EXPORT_SYMBOL(dma_pool_create);
 
 static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
 {
-	unsigned int offset = 0;
-	unsigned int next_boundary = pool->boundary;
+	unsigned int next_boundary = pool->boundary, offset = 0;
+	struct dma_block *block;
 
 	pool_init_page(pool, page);
-	page->in_use = 0;
-	page->offset = 0;
-	do {
-		unsigned int next = offset + pool->size;
-		if (unlikely((next + pool->size) >= next_boundary)) {
-			next = next_boundary;
+	while (offset + pool->size <= pool->allocation) {
+		if (offset + pool->size > next_boundary) {
+			offset = next_boundary;
 			next_boundary += pool->boundary;
+			continue;
 		}
-		*(int *)(page->vaddr + offset) = next;
-		offset = next;
-	} while (offset < pool->allocation);
+
+		block = page->vaddr + offset;
+		pool_block_push(pool, block, page->dma + offset);
+		offset += pool->size;
+		pool->nr_blocks++;
+	}
+
+	list_add(&page->page_list, &pool->page_list);
+	pool->nr_pages++;
 }
 
 static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
@@ -305,15 +336,9 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
 		return NULL;
 	}
 
-	pool_initialise_page(pool, page);
 	return page;
 }
 
-static inline bool is_page_busy(struct dma_page *page)
-{
-	return page->in_use != 0;
-}
-
 /**
  * dma_pool_destroy - destroys a pool of dma memory blocks.
  * @pool: dma pool that will be destroyed
@@ -325,7 +350,7 @@ static inline bool is_page_busy(struct dma_page *page)
 void dma_pool_destroy(struct dma_pool *pool)
 {
 	struct dma_page *page, *tmp;
-	bool empty = false;
+	bool empty = false, busy = false;
 
 	if (unlikely(!pool))
 		return;
@@ -340,13 +365,15 @@ void dma_pool_destroy(struct dma_pool *pool)
 		device_remove_file(pool->dev, &dev_attr_pools);
 	mutex_unlock(&pools_reg_lock);
 
+	if (pool->nr_active) {
+		dev_err(pool->dev, "%s %s busy\n", __func__, pool->name);
+		busy = true;
+	}
+
 	list_for_each_entry_safe(page, tmp, &pool->page_list, page_list) {
-		if (!is_page_busy(page))
+		if (!busy)
 			dma_free_coherent(pool->dev, pool->allocation,
 					  page->vaddr, page->dma);
-		else
-			dev_err(pool->dev, "%s %s, %p busy\n", __func__,
-				pool->name, page->vaddr);
 		list_del(&page->page_list);
 		kfree(page);
 	}
@@ -368,58 +395,40 @@ EXPORT_SYMBOL(dma_pool_destroy);
 void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 		     dma_addr_t *handle)
 {
-	unsigned long flags;
+	struct dma_block *block;
 	struct dma_page *page;
-	unsigned int offset;
-	void *retval;
+	unsigned long flags;
 
 	might_alloc(mem_flags);
 
 	spin_lock_irqsave(&pool->lock, flags);
-	list_for_each_entry(page, &pool->page_list, page_list) {
-		if (page->offset < pool->allocation)
-			goto ready;
-	}
-
-	/* pool_alloc_page() might sleep, so temporarily drop &pool->lock */
-	spin_unlock_irqrestore(&pool->lock, flags);
-
-	page = pool_alloc_page(pool, mem_flags & (~__GFP_ZERO));
-	if (!page)
-		return NULL;
+	block = pool_block_pop(pool);
+	if (!block) {
+		/*
+		 * pool_alloc_page() might sleep, so temporarily drop
+		 * &pool->lock
+		 */
+		spin_unlock_irqrestore(&pool->lock, flags);
 
-	spin_lock_irqsave(&pool->lock, flags);
+		page = pool_alloc_page(pool, mem_flags & (~__GFP_ZERO));
+		if (!page)
+			return NULL;
 
-	list_add(&page->page_list, &pool->page_list);
- ready:
-	page->in_use++;
-	offset = page->offset;
-	page->offset = *(int *)(page->vaddr + offset);
-	retval = offset + page->vaddr;
-	*handle = offset + page->dma;
-	pool_check_block(pool, retval, offset, mem_flags);
+		spin_lock_irqsave(&pool->lock, flags);
+		pool_initialise_page(pool, page);
+		block = pool_block_pop(pool);
+	}
 	spin_unlock_irqrestore(&pool->lock, flags);
 
+	*handle = block->dma;
+	pool_check_block(pool, block, mem_flags);
 	if (want_init_on_alloc(mem_flags))
-		memset(retval, 0, pool->size);
+		memset(block, 0, pool->size);
 
-	return retval;
+	return block;
 }
 EXPORT_SYMBOL(dma_pool_alloc);
 
-static struct dma_page *pool_find_page(struct dma_pool *pool, dma_addr_t dma)
-{
-	struct dma_page *page;
-
-	list_for_each_entry(page, &pool->page_list, page_list) {
-		if (dma < page->dma)
-			continue;
-		if ((dma - page->dma) < pool->allocation)
-			return page;
-	}
-	return NULL;
-}
-
 /**
  * dma_pool_free - put block back into dma pool
  * @pool: the dma pool holding the block
@@ -431,31 +440,14 @@ static struct dma_page *pool_find_page(struct dma_pool *pool, dma_addr_t dma)
  */
 void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
 {
-	struct dma_page *page;
+	struct dma_block *block = vaddr;
 	unsigned long flags;
 
 	spin_lock_irqsave(&pool->lock, flags);
-	page = pool_find_page(pool, dma);
-	if (!page) {
-		spin_unlock_irqrestore(&pool->lock, flags);
-		dev_err(pool->dev, "%s %s, %p/%pad (bad dma)\n",
-			__func__, pool->name, vaddr, &dma);
-		return;
+	if (!pool_block_err(pool, vaddr, dma)) {
+		pool_block_push(pool, block, dma);
+		pool->nr_active--;
 	}
-
-	if (pool_page_err(pool, page, vaddr, dma)) {
-		spin_unlock_irqrestore(&pool->lock, flags);
-		return;
-	}
-
-	page->in_use--;
-	*(int *)vaddr = page->offset;
-	page->offset = vaddr - page->vaddr;
-	/*
-	 * Resist a temptation to do
-	 *	if (!is_page_busy(page)) pool_free_page(pool, page);
-	 * Better have a few empty pages hang around.
-	 */
 	spin_unlock_irqrestore(&pool->lock, flags);
 }
 EXPORT_SYMBOL(dma_pool_free);