From patchwork Wed Jan 21 14:55:20 2015
From: Milosz Tanski
Date: Wed, 21 Jan 2015 09:55:20 -0500
Subject: Re: [PATCH v6 0/7] vfs: Non-blockling buffered fs read (page cache only)
To: Volker Lendecke
Cc: Andrew Morton, LKML, Christoph Hellwig, linux-fsdevel@vger.kernel.org,
 linux-aio@kvack.org, Mel Gorman, Tejun Heo, Jeff Moyer, Theodore Ts'o,
 Al Viro, Linux API, Michael Kerrisk, linux-arch@vger.kernel.org

On Fri, Dec 5, 2014 at 3:17 AM, Volker Lendecke wrote:
>
> On Thu, Dec 04, 2014 at 03:11:02PM -0800, Andrew Morton wrote:
> > I can see all that, but it's handwaving.  Yes, preadv2() will perform
> > better in some circumstances than fincore+pread.  But how much better?
> > Enough to justify this approach, or not?
> >
> > Alas, the only way to really settle that is to implement fincore() and
> > to subject it to a decent amount of realistic quantitative testing.
> >
> > Ho hum.
> >
> > Could you please hunt down some libuv developers, see if we can solicit
> > some quality input from them?  As I said, we really don't want to merge
> > this then find that people don't use it for some reason, or that it
> > needs changes.
>
> All I can say from a Samba perspective is that none of the ARM based
> storage boxes I have seen so far do AIO, because of the base footprint
> for every read. For sequential reads, kernel-level readahead could kick
> in properly, and we should be able to give them the best of both worlds:
> no context switches in the default case, but also good parallel behaviour
> for other workloads. The most important benchmark for those guys is
> reading a DVD image, whether that makes sense or not.

I just wanted to share some progress on this, and I apologize for all
these different threads (this one, LSF/FS, and the ones with Jeremy and
Volker).

I recently implemented cifs support (via libsmbclient) for FIO, so I can
get some hard numbers from the benchmarks. All of you will be seeing more
data soon enough. It's going to take a bit of time to put together,
because careful benchmarking takes time to produce correct, non-noisy
numbers.

In the meantime, here are some numbers from my first run:
http://i.imgur.com/05SMu8d.jpg (sorry for the link to an image, it was
easier). The test case is a single FIO client doing 4K random reads
against a localhost smbd server, on a fully cached file, for 10 minutes
with a 1 minute warm-up. Threadpool + preadv2 for the fast read does much
better in terms of bandwidth and a bit better in terms of latency. Sync is
still the fastest, but the gap has narrowed. Not a bad improvement for
(Volker's) 9-line change to the Samba code, quoted at the bottom.

I also looked into why the gap between sync and threadpool + preadv2
isn't even smaller. From my preliminary investigation, it looks like the
async threadpool code path does a lot more work than the sync call, even
when we take the fast read. According to perf, the hottest userspace code
(smbd plus libraries) is malloc + free, so I imagine that optimizing the
fast-read case to avoid a bunch of extra request allocations will bring
us even closer to sync.

Again, I'll have more complex test cases soon; I just wanted to share
progress. I expect the gap between threadpool + preadv2 and plain
threadpool to get wider as we add more blocking calls into the queue.
I'll have numbers on that as soon as I can.
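For the curious, the FIO job for the run above boils down to something
like this. A sketch only: the cifs/libsmbclient engine I mentioned is not
in upstream fio, so the ioengine name below is a placeholder (a real job
would also need engine-specific options naming the server and share);
everything else is stock fio and just mirrors the description above.

; sketch of the 4K cached random-read job described above
[cached-4k-randread]
; placeholder name -- the libsmbclient engine is not upstream
ioengine=smbclient
rw=randread
bs=4k
; keep the file small enough to sit fully in the page cache
size=1g
time_based=1
runtime=600
ramp_time=60
numjobs=1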
diff --git a/source3/modules/vfs_default.c b/source3/modules/vfs_default.c
index 5634cc0..90348d8 100644
--- a/source3/modules/vfs_default.c
+++ b/source3/modules/vfs_default.c
@@ -718,6 +741,7 @@ static struct tevent_req *vfswrap_pread_send(struct vfs_handle_struct *handle,
 	struct tevent_req *req;
 	struct vfswrap_asys_state *state;
 	int ret;
+	ssize_t nread;
 
 	req = tevent_req_create(mem_ctx, &state, struct vfswrap_asys_state);
 	if (req == NULL) {
@@ -730,6 +754,14 @@ static struct tevent_req *vfswrap_pread_send(struct vfs_handle_struct *handle,
 	state->asys_ctx = handle->conn->sconn->asys_ctx;
 	state->req = req;
 
+	nread = pread2(fsp->fh->fd, data, n, offset, RWF_NONBLOCK);
+	/* TODO: partial reads */
+	if (nread == n) {
+		state->ret = nread;
+		tevent_req_done(req);
+		return tevent_req_post(req, ev);
+	}
+
 	SMBPROFILE_BYTES_ASYNC_START(syscall_asys_pread, profile_p,
 				     state->profile_bytes, n);
 	ret = asys_pread(state->asys_ctx, fsp->fh->fd, data, n, offset, req);
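Outside of the Samba plumbing, the pattern the change above implements is
roughly the sketch below. It assumes the preadv2()/RWF_NONBLOCK interface
proposed in this series (there is no glibc wrapper yet, so the declaration
and the flag value are placeholders), and queue_async_pread() is a
hypothetical stand-in for whatever threadpool read path the server already
has:

#include <errno.h>
#include <sys/types.h>
#include <sys/uio.h>

/*
 * Placeholders: preadv2() and RWF_NONBLOCK are the interface proposed in
 * this series; neither is in glibc yet, and the flag value here is made up.
 */
#define RWF_NONBLOCK 0x00000001
extern ssize_t preadv2(int fd, const struct iovec *iov, int iovcnt,
		       off_t offset, int flags);

/*
 * Hypothetical stand-in for the server's existing threadpool read path;
 * returns 0 once the request has been queued, with completion delivered
 * asynchronously.
 */
extern int queue_async_pread(int fd, void *buf, size_t n, off_t offset);

static ssize_t fast_or_async_pread(int fd, void *buf, size_t n, off_t offset)
{
	struct iovec iov = { .iov_base = buf, .iov_len = n };
	ssize_t nread;

	/*
	 * Fast path: if the whole range is in the page cache, the read
	 * completes inline -- no threadpool hop, no context switch.
	 */
	nread = preadv2(fd, &iov, 1, offset, RWF_NONBLOCK);
	if (nread == (ssize_t)n)
		return nread;
	if (nread < 0 && errno != EAGAIN)
		return -1;

	/*
	 * Slow path: nothing (or only part) of the range was cached.  Like
	 * the TODO in the patch above, a partial fast read is discarded and
	 * the full request is queued to the threadpool instead.
	 */
	return queue_async_pread(fd, buf, n, offset);
}

The bandwidth win in the numbers above comes from the fully cached case
always completing in the first branch; everything else still flows through
the existing async path.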