From: Dan Merillat
To: BTRFS Mailing list <linux-btrfs@vger.kernel.org>
Date: Sun, 2 Sep 2012 17:20:54 -0400
Subject: Segregate metadata to SSD?

Is it possible to weight the allocation of data/system/metadata so that data
goes on large, slow drives while system/metadata goes on a fast SSD? I don't
have exact numbers, but I'd guess the vast majority of seeks during normal
operation are lookups of tiny bits of metadata, while data reads and writes
happen in much larger chunks. Obviously a database workload would strike a
different balance, but for most systems this looks like it would be a
substantial improvement.

Data: total=5625880576k (5.24TB), used=5455806964k (5.08TB)
System, DUP: total=32768k (32.00MB), used=724k (724.00KB)
System: total=4096k (4.00MB), used=0k (0.00)
Metadata, DUP: total=117291008k (111.86GB), used=13509540k (12.88GB)

Out of my nearly 6TB filesystem, I could trivially accelerate the whole thing
with a 128GB SSD. On a side note, that's nearly 10:1 metadata over-allocation,
and I've never had more than three snapshots at any given time (current,
rollback1, rollback2); I think it grew that large during a rebalance. With
that reclaimed, I could get away with a tiny 64GB SSD.
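(For the over-allocation, a sketch: assuming a kernel with the restriper,
3.3 or later, and a btrfs-progs new enough to understand balance filters,
something like the following should compact mostly-empty metadata chunks
and return the excess to the pool. Untested here, and /mnt stands in for
the actual mount point:

    # rewrite only metadata chunks that are less than 20% used
    btrfs balance start -musage=20 /mnt

The usage filter keeps the balance from rewriting every chunk on a nearly
6TB filesystem just to reclaim the mostly-empty metadata ones.)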
pretty_sizes output was too coarse to use in monitoring scripts, so:

diff --git a/cmds-filesystem.c b/cmds-filesystem.c
index b1457de..dc5fea6 100644
--- a/cmds-filesystem.c
+++ b/cmds-filesystem.c
@@ -145,8 +145,9 @@ static int cmd_df(int argc, char **argv)
 
 		total_bytes = pretty_sizes(sargs->spaces[i].total_bytes);
 		used_bytes = pretty_sizes(sargs->spaces[i].used_bytes);
-		printf("%s: total=%s, used=%s\n", description, total_bytes,
-		       used_bytes);
+		printf("%s: total=%ldk (%s), used=%ldk (%s)\n", description,
+		       sargs->spaces[i].total_bytes/1024, total_bytes,
+		       sargs->spaces[i].used_bytes/1024, used_bytes);
 	}
 	free(sargs);
 
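With the patch applied, each row carries the exact KiB count alongside the
human-readable size (the df output quoted above is already in this format).
A monitoring script can then pull the raw numbers with a plain sed, e.g. (a
sketch, assuming the patched output format and a /mnt mount point):

    # print metadata total and used, in KiB
    btrfs filesystem df /mnt | \
        sed -n 's/^Metadata.*total=\([0-9]*\)k.*used=\([0-9]*\)k.*/\1 \2/p'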