From patchwork Thu Mar 5 11:59:52 2015
Message-ID: <54F84538.8080500@plexistor.com>
Date: Thu, 05 Mar 2015 13:59:52 +0200
From: Boaz Harrosh
To: Ingo Molnar, x86@kernel.org, linux-kernel, "Roger C. Pao", Dan Williams,
 Thomas Gleixner, linux-nvdimm, "H. Peter Anvin", Matthew Wilcox,
 Andy Lutomirski, Christoph Hellwig
References: <54F82CE0.4040502@plexistor.com> <54F830D4.7030205@plexistor.com>
In-Reply-To: <54F830D4.7030205@plexistor.com>
Subject: [Linux-nvdimm] [PATCH 7/8] pmem: Add support for page structs
One of the current shortcomings of the NVDIMM/PMEM support is that this
memory has no page structs associated with it, and therefore cannot be
passed to a block device or the network, or be DMAed in any way through
another device in the system.

The use of add_persistent_memory() fixes all this. After this patch an
FS can do:

	bdev_direct_access(,&pfn,);
	page = pfn_to_page(pfn);

and use that page for a lock_page(), set_page_dirty(), and/or anything
else one might do with a struct page *.
(Note that with brd one can already do this.)

[pmem-pages-ref-count]
pmem will serve its pages with ref==0. Once an FS does a
blkdev_get_XXX(,FMODE_EXCL,), that memory is owned by the FS. The FS
needs to manage its allocation, just as it already does for its disk
blocks. The FS should set page->count = 2 before submission to any
kernel subsystem, so that when the page comes back it is never released
to the kernel's page allocator. (page_freeze)

Signed-off-by: Boaz Harrosh
---
 drivers/block/Kconfig | 13 +++++++++++++
 drivers/block/pmem.c  | 20 ++++++++++++++++++++
 2 files changed, 33 insertions(+)

diff --git a/drivers/block/Kconfig b/drivers/block/Kconfig
index 1530c2a..635fa6a 100644
--- a/drivers/block/Kconfig
+++ b/drivers/block/Kconfig
@@ -422,6 +422,19 @@ config BLK_DEV_PMEM
 	  Most normal users won't need this functionality, and can thus
 	  say N here.
 
+config BLK_DEV_PMEM_USE_PAGES
+	bool "Enable use of page struct pages with pmem"
+	depends on BLK_DEV_PMEM
+	depends on PERSISTENT_MEMORY_DEPENDENCY
+	select DRIVER_NEEDS_PERSISTENT_MEMORY
+	default y
+	help
+	  If a user of the PMEM device needs "struct page" associated
+	  with its memory, so this memory can be sent to other
+	  block devices, or sent on the network, or be DMA transferred
+	  to other devices in the system, then you must say "Yes" here.
+	  If unsure, leave as Yes.
+
 config CDROM_PKTCDVD
 	tristate "Packet writing on CD/DVD media"
 	depends on !UML
diff --git a/drivers/block/pmem.c b/drivers/block/pmem.c
index f0f0ba0..d0c80f4 100644
--- a/drivers/block/pmem.c
+++ b/drivers/block/pmem.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -141,6 +142,24 @@ MODULE_PARM_DESC(map,
 
 static LIST_HEAD(pmem_devices);
 
+#ifdef CONFIG_BLK_DEV_PMEM_USE_PAGES
+/* pmem->phys_addr and pmem->size need to be set.
+ * Will then set pmem->virt_addr if successful.
+ */
+int pmem_mapmem(struct pmem_device *pmem)
+{
+	return add_persistent_memory(pmem->phys_addr, pmem->size,
+				     &pmem->virt_addr);
+}
+
+static void pmem_unmapmem(struct pmem_device *pmem)
+{
+	remove_persistent_memory(pmem->phys_addr, pmem->size);
+}
+
+#define PMEM_ALIGNMEM	(1UL << SECTION_SIZE_BITS)
+#else /* !CONFIG_BLK_DEV_PMEM_USE_PAGES */
+
 /* pmem->phys_addr and pmem->size need to be set.
  * Will then set virt_addr if successful.
  */
@@ -180,6 +199,7 @@ void pmem_unmapmem(struct pmem_device *pmem)
 }
 
 #define PMEM_ALIGNMEM	PAGE_SIZE
+#endif /* ! CONFIG_BLK_DEV_PMEM_USE_PAGES */
 
 static struct pmem_device *pmem_alloc(phys_addr_t phys_addr,
 				      size_t disk_size, int i)
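
To make the [pmem-pages-ref-count] contract above concrete, here is a
minimal, illustrative sketch (not part of the patch) of how a filesystem
that holds the pmem block device with FMODE_EXCL might turn a page-aligned
device sector into a pinned struct page. It assumes the v4.0-era
bdev_direct_access() prototype (bdev, sector, &addr, &pfn, size); the
helper name example_pmem_sector_to_page() is made up for illustration.

/*
 * Illustrative only: map a page-aligned sector of a pmem bdev to a
 * pinned struct page, per the ref-count rules in the commit message.
 */
#include <linux/blkdev.h>
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/mm.h>

static struct page *example_pmem_sector_to_page(struct block_device *bdev,
						sector_t sector)
{
	void *addr;
	unsigned long pfn;
	long avail;
	struct page *page;

	avail = bdev_direct_access(bdev, sector, &addr, &pfn, PAGE_SIZE);
	if (avail < 0)
		return ERR_PTR(avail);
	if (avail < PAGE_SIZE)
		return ERR_PTR(-ENXIO);

	/* Valid only because pmem memory now has page structs backing it */
	page = pfn_to_page(pfn);

	/*
	 * pmem hands its pages out with ref == 0.  Bring the count to 2
	 * before submitting the page to any other kernel subsystem, so a
	 * put_page() on completion never frees it into the page allocator.
	 */
	init_page_count(page);	/* ref = 1 */
	get_page(page);		/* ref = 2 */

	return page;
}

Taking the count to 2 rather than 1 follows the note above: a later
put_page() from, say, the block layer drops it back to 1 instead of 0, so
the page never falls into the kernel's page allocator, and freeing the
block remains the FS allocator's job.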