From patchwork Thu Feb 6 18:50:55 2025
From: Frank van der Linden <fvdl@google.com>
Date: Thu, 6 Feb 2025 18:50:55 +0000
Subject: [PATCH v3 15/28] mm/sparse: add vmemmap_*_hvo functions
To: akpm@linux-foundation.org, muchun.song@linux.dev, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Cc: yuzhao@google.com, usamaarif642@gmail.com, joao.m.martins@oracle.com,
 roman.gushchin@linux.dev, Frank van der Linden <fvdl@google.com>
Message-ID: <20250206185109.1210657-16-fvdl@google.com>
In-Reply-To: <20250206185109.1210657-1-fvdl@google.com>
References: <20250206185109.1210657-1-fvdl@google.com>

Add a few functions to enable early HVO:

	vmemmap_populate_hvo
	vmemmap_undo_hvo
	vmemmap_wrprotect_hvo

The populate and undo functions are expected to be used in early init,
from the sparse_init_nid_early() function.
The wrprotect function is to be used, potentially, later.

To implement these functions, mostly re-use the existing compound pages
vmemmap logic used by DAX. vmemmap_populate_address has its argument
changed a bit in this commit: the page structure passed in to be reused
in the mapping is replaced by a PFN and a flag. The flag indicates whether
an extra ref should be taken on the vmemmap page containing the head page
structure. Taking the ref is appropriate for DAX / ZONE_DEVICE, but not
for HugeTLB HVO.

The HugeTLB vmemmap optimization maps tail page structure pages
read-only. The vmemmap_wrprotect_hvo function that does this is
implemented separately, because it cannot be guaranteed that reserved
page structures will not be write accessed during memory initialization.
Even with CONFIG_DEFERRED_STRUCT_PAGE_INIT, they might still be written
to (if they are at the bottom of a zone). So, vmemmap_populate_hvo leaves
the tail page structure pages RW initially, and then later during
initialization, after memmap init is fully done, vmemmap_wrprotect_hvo
must be called to finish the job.

Subsequent commits will use these functions for early HugeTLB HVO.
Signed-off-by: Frank van der Linden <fvdl@google.com>
---
 include/linux/mm.h  |   9 ++-
 mm/sparse-vmemmap.c | 141 +++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 135 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index df83653ed6e3..0463c062fd7a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3837,7 +3837,8 @@ p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node);
 pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node);
 pmd_t *vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node);
 pte_t *vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
-			    struct vmem_altmap *altmap, struct page *reuse);
+			    struct vmem_altmap *altmap, unsigned long ptpfn,
+			    unsigned long flags);
 void *vmemmap_alloc_block(unsigned long size, int node);
 struct vmem_altmap;
 void *vmemmap_alloc_block_buf(unsigned long size, int node,
@@ -3853,6 +3854,12 @@ int vmemmap_populate_hugepages(unsigned long start, unsigned long end,
 			       int node, struct vmem_altmap *altmap);
 int vmemmap_populate(unsigned long start, unsigned long end, int node,
 		     struct vmem_altmap *altmap);
+int vmemmap_populate_hvo(unsigned long start, unsigned long end, int node,
+			 unsigned long headsize);
+int vmemmap_undo_hvo(unsigned long start, unsigned long end, int node,
+		     unsigned long headsize);
+void vmemmap_wrprotect_hvo(unsigned long start, unsigned long end, int node,
+			   unsigned long headsize);
 void vmemmap_populate_print_last(void);
 #ifdef CONFIG_MEMORY_HOTPLUG
 void vmemmap_free(unsigned long start, unsigned long end,
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 8751c46c35e4..8cc848c4b17c 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -30,6 +30,13 @@
 #include
 #include
+#include
+
+/*
+ * Flags for vmemmap_populate_range and friends.
+ */
+/* Get a ref on the head page struct page, for ZONE_DEVICE compound pages */
+#define VMEMMAP_POPULATE_PAGEREF 0x0001
 
 #include "internal.h"
@@ -144,17 +151,18 @@ void __meminit vmemmap_verify(pte_t *pte, int node,
 
 pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
 				       struct vmem_altmap *altmap,
-				       struct page *reuse)
+				       unsigned long ptpfn, unsigned long flags)
 {
 	pte_t *pte = pte_offset_kernel(pmd, addr);
 	if (pte_none(ptep_get(pte))) {
 		pte_t entry;
 		void *p;
 
-		if (!reuse) {
+		if (ptpfn == (unsigned long)-1) {
 			p = vmemmap_alloc_block_buf(PAGE_SIZE, node, altmap);
 			if (!p)
 				return NULL;
+			ptpfn = PHYS_PFN(__pa(p));
 		} else {
 			/*
 			 * When a PTE/PMD entry is freed from the init_mm
@@ -165,10 +173,10 @@ pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node,
 			 * and through vmemmap_populate_compound_pages() when
 			 * slab is available.
 			 */
-			get_page(reuse);
-			p = page_to_virt(reuse);
+			if (flags & VMEMMAP_POPULATE_PAGEREF)
+				get_page(pfn_to_page(ptpfn));
 		}
-		entry = pfn_pte(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL);
+		entry = pfn_pte(ptpfn, PAGE_KERNEL);
 		set_pte_at(&init_mm, addr, pte, entry);
 	}
 	return pte;
@@ -238,7 +246,8 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
 
 static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
 						  struct vmem_altmap *altmap,
-						  struct page *reuse)
+						  unsigned long ptpfn,
+						  unsigned long flags)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -258,7 +267,7 @@ static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
 	pmd = vmemmap_pmd_populate(pud, addr, node);
 	if (!pmd)
 		return NULL;
-	pte = vmemmap_pte_populate(pmd, addr, node, altmap, reuse);
+	pte = vmemmap_pte_populate(pmd, addr, node, altmap, ptpfn, flags);
 	if (!pte)
 		return NULL;
 	vmemmap_verify(pte, node, addr, addr + PAGE_SIZE);
@@ -269,13 +278,15 @@ static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node,
 static int __meminit vmemmap_populate_range(unsigned long start,
 					    unsigned long end, int node,
 					    struct vmem_altmap *altmap,
-					    struct page *reuse)
+					    unsigned long ptpfn,
+					    unsigned long flags)
 {
 	unsigned long addr = start;
 	pte_t *pte;
 
 	for (; addr < end; addr += PAGE_SIZE) {
-		pte = vmemmap_populate_address(addr, node, altmap, reuse);
+		pte = vmemmap_populate_address(addr, node, altmap,
+					       ptpfn, flags);
 		if (!pte)
 			return -ENOMEM;
 	}
@@ -286,7 +297,107 @@ static int __meminit vmemmap_populate_range(unsigned long start,
 int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
 					 int node, struct vmem_altmap *altmap)
 {
-	return vmemmap_populate_range(start, end, node, altmap, NULL);
+	return vmemmap_populate_range(start, end, node, altmap, -1, 0);
+}
+
+/*
+ * Undo populate_hvo, and replace it with a normal base page mapping.
+ * Used in memory init in case a HVO mapping needs to be undone.
+ *
+ * This can happen when it is discovered that a memblock allocated
+ * hugetlb page spans multiple zones, which can only be verified
+ * after zones have been initialized.
+ *
+ * We know that:
+ * 1) The first @headsize / PAGE_SIZE vmemmap pages were individually
+ *    allocated through memblock, and mapped.
+ *
+ * 2) The rest of the vmemmap pages are mirrors of the last head page.
+ */
+int __meminit vmemmap_undo_hvo(unsigned long addr, unsigned long end,
+			       int node, unsigned long headsize)
+{
+	unsigned long maddr, pfn;
+	pte_t *pte;
+	int headpages;
+
+	/*
+	 * Should only be called early in boot, so nothing will
+	 * be accessing these page structures.
+	 */
+	WARN_ON(!early_boot_irqs_disabled);
+
+	headpages = headsize >> PAGE_SHIFT;
+
+	/*
+	 * Clear mirrored mappings for tail page structs.
+	 */
+	for (maddr = addr + headsize; maddr < end; maddr += PAGE_SIZE) {
+		pte = virt_to_kpte(maddr);
+		pte_clear(&init_mm, maddr, pte);
+	}
+
+	/*
+	 * Clear and free mappings for head page and first tail page
+	 * structs.
+	 */
+	for (maddr = addr; headpages-- > 0; maddr += PAGE_SIZE) {
+		pte = virt_to_kpte(maddr);
+		pfn = pte_pfn(ptep_get(pte));
+		pte_clear(&init_mm, maddr, pte);
+		memblock_phys_free(PFN_PHYS(pfn), PAGE_SIZE);
+	}
+
+	flush_tlb_kernel_range(addr, end);
+
+	return vmemmap_populate(addr, end, node, NULL);
+}
+
+/*
+ * Write protect the mirrored tail page structs for HVO. This will be
+ * called from the hugetlb code when gathering and initializing the
+ * memblock allocated gigantic pages. The write protect can't be
+ * done earlier, since it can't be guaranteed that the reserved
+ * page structures will not be written to during initialization,
+ * even if CONFIG_DEFERRED_STRUCT_PAGE_INIT is enabled.
+ *
+ * The PTEs are known to exist, and nothing else should be touching
+ * these pages. The caller is responsible for any TLB flushing.
+ */
+void vmemmap_wrprotect_hvo(unsigned long addr, unsigned long end,
+			   int node, unsigned long headsize)
+{
+	unsigned long maddr;
+	pte_t *pte;
+
+	for (maddr = addr + headsize; maddr < end; maddr += PAGE_SIZE) {
+		pte = virt_to_kpte(maddr);
+		ptep_set_wrprotect(&init_mm, maddr, pte);
+	}
+}
+
+/*
+ * Populate vmemmap pages HVO-style. The first page contains the head
+ * page and needed tail pages, the other ones are mirrors of the first
+ * page.
+ */
+int __meminit vmemmap_populate_hvo(unsigned long addr, unsigned long end,
+				   int node, unsigned long headsize)
+{
+	pte_t *pte;
+	unsigned long maddr;
+
+	for (maddr = addr; maddr < addr + headsize; maddr += PAGE_SIZE) {
+		pte = vmemmap_populate_address(maddr, node, NULL, -1, 0);
+		if (!pte)
+			return -ENOMEM;
+	}
+
+	/*
+	 * Reuse the last page struct page mapped above for the rest.
+	 */
+	return vmemmap_populate_range(maddr, end, node, NULL,
+				      pte_pfn(ptep_get(pte)), 0);
 }
 
 void __weak __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
@@ -409,7 +520,8 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
 		 * with just tail struct pages.
 		 */
 		return vmemmap_populate_range(start, end, node, NULL,
-					      pte_page(ptep_get(pte)));
+					      pte_pfn(ptep_get(pte)),
+					      VMEMMAP_POPULATE_PAGEREF);
 	}
 
 	size = min(end - start, pgmap_vmemmap_nr(pgmap) * sizeof(struct page));
@@ -417,13 +529,13 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
 		unsigned long next, last = addr + size;
 
 		/* Populate the head page vmemmap page */
-		pte = vmemmap_populate_address(addr, node, NULL, NULL);
+		pte = vmemmap_populate_address(addr, node, NULL, -1, 0);
 		if (!pte)
 			return -ENOMEM;
 
 		/* Populate the tail pages vmemmap page */
 		next = addr + PAGE_SIZE;
-		pte = vmemmap_populate_address(next, node, NULL, NULL);
+		pte = vmemmap_populate_address(next, node, NULL, -1, 0);
 		if (!pte)
 			return -ENOMEM;
 
@@ -433,7 +545,8 @@ static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn,
 		 */
 		next += PAGE_SIZE;
 		rc = vmemmap_populate_range(next, last, node, NULL,
-					    pte_page(ptep_get(pte)));
+					    pte_pfn(ptep_get(pte)),
+					    VMEMMAP_POPULATE_PAGEREF);
 		if (rc)
 			return -ENOMEM;
 	}