From patchwork Thu Jan 30 04:42:48 2025
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sergey Senozhatsky <senozhatsky@chromium.org>
X-Patchwork-Id: 13954303
From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Andrew Morton, Yosry Ahmed
Cc: Minchan Kim, Johannes Weiner, Nhat Pham, Uros Bizjak,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCHv2 5/7] zsmalloc: introduce new object mapping API
Date: Thu, 30 Jan 2025 13:42:48 +0900
Message-ID: <20250130044455.2642465-6-senozhatsky@chromium.org>
X-Mailer: git-send-email 2.48.1.262.g85cc9f2d1e-goog
In-Reply-To: <20250130044455.2642465-1-senozhatsky@chromium.org>
References: <20250130044455.2642465-1-senozhatsky@chromium.org>
MIME-Version: 1.0

The current object mapping API is a little cumbersome.  First, it is
inconsistent: sometimes it returns with page-faults disabled and
sometimes with page-faults enabled.  Second, and most importantly, it
enforces atomicity restrictions on its users.  zs_map_object() has to
return a linear object address, which is not always possible because
some objects span multiple physical (non-contiguous) pages.  For such
objects zsmalloc uses a per-CPU buffer to which the object's data is
copied before a pointer to that per-CPU buffer is returned to the
caller.  This leads to the final issue: an extra memcpy().  Since the
caller gets a pointer to the per-CPU buffer, it can memcpy() data only
to that buffer; during zs_unmap_object() zsmalloc then memcpy()s from
that per-CPU buffer to the physical pages that the object in question
spans.

The new API splits the functions by access mode:

- zs_obj_read_begin(handle, local_copy)
  Returns a pointer to the handle's memory.  For objects that span two
  physical pages, a local_copy buffer is used to store the object's
  data before the address is returned to the caller.  Otherwise the
  object's page is kmap_local mapped directly.

- zs_obj_read_end(handle, buf)
  Unmaps the page if it was kmap_local mapped by zs_obj_read_begin().

- zs_obj_write(handle, buf, len)
  Copies len bytes from the compression buffer to the handle's memory
  (and takes care of objects that span two pages).  This does not need
  any additional (e.g. per-CPU) buffers and writes the data directly to
  the zsmalloc pool pages.

The old API will stay around until the remaining users switch to the
new one.  After that we'll also remove the zsmalloc per-CPU buffer and
the CPU hotplug handling.
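For illustration only (not part of this patch), a minimal sketch of how
a caller might use the new API.  The example_read()/example_write()
helpers are hypothetical, and local_copy is assumed to be a
caller-provided buffer large enough to hold the whole object (e.g. the
object's size class size):

#include <linux/string.h>
#include <linux/zsmalloc.h>

static void example_read(struct zs_pool *pool, unsigned long handle,
			 void *dst, size_t obj_len, void *local_copy)
{
	void *src;

	/*
	 * Either a kmap_local address of the object's page or
	 * local_copy, if the object spans two physical pages.
	 */
	src = zs_obj_read_begin(pool, handle, local_copy);
	memcpy(dst, src, obj_len);
	/* Unmaps the page if zs_obj_read_begin() kmap_local mapped it */
	zs_obj_read_end(pool, handle, src);
}

static void example_write(struct zs_pool *pool, unsigned long handle,
			  void *compr_buf, size_t compr_len)
{
	/* Copies compr_len bytes straight into the pool pages */
	zs_obj_write(pool, handle, compr_buf, compr_len);
}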
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Reviewed-by: Yosry Ahmed
---
 include/linux/zsmalloc.h |   8 +++
 mm/zsmalloc.c            | 130 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 138 insertions(+)

diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
index a48cd0ffe57d..7d70983cf398 100644
--- a/include/linux/zsmalloc.h
+++ b/include/linux/zsmalloc.h
@@ -58,4 +58,12 @@ unsigned long zs_compact(struct zs_pool *pool);
 unsigned int zs_lookup_class_index(struct zs_pool *pool, unsigned int size);
 
 void zs_pool_stats(struct zs_pool *pool, struct zs_pool_stats *stats);
+
+void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
+			void *local_copy);
+void zs_obj_read_end(struct zs_pool *pool, unsigned long handle,
+		     void *handle_mem);
+void zs_obj_write(struct zs_pool *pool, unsigned long handle,
+		  void *handle_mem, size_t mem_len);
+
 #endif
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index d8cc8e2598cc..67c934ed4be9 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1367,6 +1367,136 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 }
 EXPORT_SYMBOL_GPL(zs_unmap_object);
 
+void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
+			void *local_copy)
+{
+	struct zspage *zspage;
+	struct zpdesc *zpdesc;
+	unsigned long obj, off;
+	unsigned int obj_idx;
+	struct size_class *class;
+	void *addr;
+
+	WARN_ON(in_interrupt());
+
+	/* Guarantee we can get zspage from handle safely */
+	pool_read_lock(pool);
+	obj = handle_to_obj(handle);
+	obj_to_location(obj, &zpdesc, &obj_idx);
+	zspage = get_zspage(zpdesc);
+
+	/* Make sure migration doesn't move any pages in this zspage */
+	zspage_read_lock(zspage);
+	pool_read_unlock(pool);
+
+	class = zspage_class(pool, zspage);
+	off = offset_in_page(class->size * obj_idx);
+
+	if (off + class->size <= PAGE_SIZE) {
+		/* this object is contained entirely within a page */
+		addr = kmap_local_zpdesc(zpdesc);
+		addr += off;
+	} else {
+		size_t sizes[2];
+
+		/* this object spans two pages */
+		sizes[0] = PAGE_SIZE - off;
+		sizes[1] = class->size - sizes[0];
+		addr = local_copy;
+
+		memcpy_from_page(addr, zpdesc_page(zpdesc),
+				 off, sizes[0]);
+		zpdesc = get_next_zpdesc(zpdesc);
+		memcpy_from_page(addr + sizes[0],
+				 zpdesc_page(zpdesc),
+				 0, sizes[1]);
+	}
+
+	if (!ZsHugePage(zspage))
+		addr += ZS_HANDLE_SIZE;
+
+	return addr;
+}
+EXPORT_SYMBOL_GPL(zs_obj_read_begin);
+
+void zs_obj_read_end(struct zs_pool *pool, unsigned long handle,
+		     void *handle_mem)
+{
+	struct zspage *zspage;
+	struct zpdesc *zpdesc;
+	unsigned long obj, off;
+	unsigned int obj_idx;
+	struct size_class *class;
+
+	obj = handle_to_obj(handle);
+	obj_to_location(obj, &zpdesc, &obj_idx);
+	zspage = get_zspage(zpdesc);
+	class = zspage_class(pool, zspage);
+	off = offset_in_page(class->size * obj_idx);
+
+	if (off + class->size <= PAGE_SIZE) {
+		if (!ZsHugePage(zspage))
+			off += ZS_HANDLE_SIZE;
+		handle_mem -= off;
+		kunmap_local(handle_mem);
+	}
+
+	zspage_read_unlock(zspage);
+}
+EXPORT_SYMBOL_GPL(zs_obj_read_end);
+
+void zs_obj_write(struct zs_pool *pool, unsigned long handle,
+		  void *handle_mem, size_t mem_len)
+{
+	struct zspage *zspage;
+	struct zpdesc *zpdesc;
+	unsigned long obj, off;
+	unsigned int obj_idx;
+	struct size_class *class;
+
+	WARN_ON(in_interrupt());
+
+	/* Guarantee we can get zspage from handle safely */
+	pool_read_lock(pool);
+	obj = handle_to_obj(handle);
+	obj_to_location(obj, &zpdesc, &obj_idx);
+	zspage = get_zspage(zpdesc);
+
+	/* Make sure migration doesn't move any pages in this zspage */
+	zspage_read_lock(zspage);
+	pool_read_unlock(pool);
+
+	class = zspage_class(pool, zspage);
+	off = offset_in_page(class->size * obj_idx);
+
+	if (off + class->size <= PAGE_SIZE) {
+		/* this object is contained entirely within a page */
+		void *dst = kmap_local_zpdesc(zpdesc);
+
+		if (!ZsHugePage(zspage))
+			off += ZS_HANDLE_SIZE;
+		memcpy(dst + off, handle_mem, mem_len);
+		kunmap_local(dst);
+	} else {
+		/* this object spans two pages */
+		size_t sizes[2];
+
+		off += ZS_HANDLE_SIZE;
+
+		sizes[0] = PAGE_SIZE - off;
+		sizes[1] = mem_len - sizes[0];
+
+		memcpy_to_page(zpdesc_page(zpdesc), off,
+			       handle_mem, sizes[0]);
+		zpdesc = get_next_zpdesc(zpdesc);
+		memcpy_to_page(zpdesc_page(zpdesc), 0,
+			       handle_mem + sizes[0], sizes[1]);
+	}
+
+	zspage_read_unlock(zspage);
+}
+EXPORT_SYMBOL_GPL(zs_obj_write);
+
 /**
  * zs_huge_class_size() - Returns the size (in bytes) of the first huge
  * zsmalloc &size_class.