From patchwork Mon Nov 18 06:05:06 2013
From: Mark Kirkwood
Date: Mon, 18 Nov 2013 19:05:06 +1300
Message-ID: <5289AE12.6020102@catalyst.net.nz>
To: Alfredo Deza, "Dave (Bob)"
Cc: ceph-devel
Subject: Could ceph-deploy handle unknown or custom distribution? (Was: Mourning the demise of mkcephfs)
In-Reply-To: <5282F309.2030104@catalyst.net.nz>

On 13/11/13 16:33, Mark Kirkwood wrote:
> I believe he is using a self built (or heavily customized) Linux
> installation - so distribution detection is not going to work in this
> case. I'm wondering if there could be some sensible fall back for that,
> e.g:
>
> - refuse to install or purge
> - assume sysv init
>

It was raining here on Saturday, so I thought I'd take a look at whether
this was even feasible (see attached). These patches are a rough
experiment to see what *might* be involved in getting to a point where
you could get a system going on one of the distributions (or a custom
build) that we currently do not support. So view them as a
thought-in-progress rather than even wip :-) It could also be quite the
wrong approach - that is ok, just let me know what you think. Hopefully
even if it is wrong, it may help stimulate discussion (or patches) for
how to do it right!

The 1st patch defines an 'unknown' distribution for cases where none of
our well known cases is detected. With it applied I can create mons, and
it will start them directly with ceph-mon. The 2nd and 3rd patches do
similarly for osd, defining a type of startup called 'direct' to mark
the osd instance with.
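The fallback in the 1st patch can be sketched roughly as follows (a minimal illustration only - `KNOWN_DISTROS` and `classify_distro` are made-up names, not ceph-deploy's actual code, and the real detection logic is more involved):

```python
# Illustrative subset of the distributions ceph-deploy knows how to drive.
KNOWN_DISTROS = {'Ubuntu', 'Debian', 'CentOS', 'Fedora', 'SUSE'}

def classify_distro(dist_tuple):
    """Map a platform.dist()-style (distro, release, codename) tuple to a
    distribution name, falling back to 'unknown' when detection yields
    nothing usable.

    A self-built Linux without the usual /etc/*-release files makes
    platform.dist() return ('', '', ''), so an empty name is treated
    the same as an unrecognised one.
    """
    distro, release, codename = dist_tuple
    if not distro or distro not in KNOWN_DISTROS:
        return 'unknown'
    return distro
```

With 'unknown' in hand, the calling code can then fall back to the 'direct' startup style instead of refusing outright.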
I needed to patch ceph-disk as well so it could start the resulting osd
directly with ceph-osd, and add it into the crushmap at a (hopefully)
sensible place. (BTW I did want to do this all in ceph-deploy, but right
now it does not know how to get an osd's id number from a disk device -
mind you, this could be useful to add for things like 'list' and
'destroy'... but I digress).

Anyway, I have attached a log of me getting a system of 2 Archlinux
nodes up. These were KVM guests built identically, and ceph (0.72) was
compiled from src and installed.

Cheers

Mark

After applying patch for unknown distro
=======================================

State:
- monitors created and start ok
- osd created ok
  * not started (startup by ceph-disk)
  * no crush setting (set by init scripts)

Test on Archlinux hosts (zor[2,3]), running ceph-deploy from Ubuntu
workstation (localhost)

1/ Do src build installing binaries and udev:

(zor2) $ sudo make install
(zor2) $ sudo cp udev/* /lib/udev/rules.d/
(zor2) $ cd /var/lib/ceph; sudo mkdir bootstrap-mds bootstrap-osd mds mon osd tmp
(zor2) $ sudo mkdir /etc/ceph

[same for zor3]

2/ Deploy with (patched) ceph-deploy

(localhost) $ ceph-deploy new zor2
(localhost) $ ceph-deploy mon create zor2
(localhost) $ ceph-deploy gatherkeys zor2
(localhost) $ ceph-deploy disk zap zor2:/dev/vdb
(localhost) $ ceph-deploy disk zap zor3:/dev/vdb
(localhost) $ ceph-deploy osd prepare zor2:/dev/vdb
(localhost) $ ceph-deploy osd prepare zor3:/dev/vdb
(localhost) $ ceph-deploy osd activate zor2:/dev/vdb1
(localhost) $ ceph-deploy osd activate zor3:/dev/vdb1

3/ Check state

(localhost) $ ceph -c ceph.conf -k ceph.client.admin.keyring -s
    cluster de5d8fac-58b1-411a-8047-dd46d0d91246
     health HEALTH_OK
     monmap e1: 1 mons at {zor2=192.168.122.12:6789/0}, election epoch 2, quorum 0 zor2
     osdmap e36: 2 osds: 2 up, 2 in
      pgmap v55: 192 pgs, 3 pools, 0 bytes data, 0 objects
            70852 kB used, 6052 MB / 6121 MB avail
                 192 active+clean

(localhost) $ ceph -c ceph.conf -k
ceph.client.admin.keyring osd tree
# id	weight	type name	up/down	reweight
-1	2	root default
-2	1		host zor3
1	1			osd.1	up	1
-3	1		host zor2
0	1			osd.0	up	1

*** ceph-disk.orig	Sun Nov 17 11:34:33 2013
--- ceph-disk	Mon Nov 18 13:21:20 2013
***************
*** 96,101 ****
--- 96,102 ----
      'upstart',
      'sysvinit',
      'systemd',
+     'direct',
      'auto',
      ]
***************
*** 1438,1443 ****
--- 1439,1470 ----
              'osd.{osd_id}'.format(osd_id=osd_id),
              ],
          )
+     elif os.path.exists(os.path.join(path, 'direct')):
+         # no idea about which init, start directly and set crush location
+         subprocess.check_call(
+             args=[
+                 '/usr/bin/ceph-osd',
+                 #'--cluster', args.cluster,
+                 #'-c', '/etc/ceph/{cluster}.conf'.format(cluster=args.cluster),
+                 '-i', '{osd_id}'.format(osd_id=osd_id),
+                 ],
+             )
+ 
+         # assume weight 1 (wrong but easy to change)!
+         subprocess.check_call(
+             args=[
+                 '/usr/bin/ceph',
+                 #'--cluster', args.cluster,
+                 #'-c', '/etc/ceph/{cluster}.conf'.format(cluster=args.cluster),
+                 'osd',
+                 'crush',
+                 'add',
+                 'osd.{osd_id}'.format(osd_id=osd_id),
+                 '1',
+                 'root=default',
+                 'host={hostname}'.format(hostname=platform.node()),
+                 ],
+             )
      else:
          raise Error('{cluster} osd.{osd_id} is not tagged with an init system'.format(
              cluster=cluster,
***************
*** 1683,1690 ****
--- 1710,1722 ----
      (distro, release, codename) = platform.dist()
      if distro == 'Ubuntu':
          init = 'upstart'
+     elif distro == 'unknown' or distro == '':
+         LOG.debug('distro is empty or unknown, assuming init is direct')
+         init = 'direct'
      else:
          init = 'sysvinit'
+     if init == 'direct':
+         LOG.debug('init is direct')
      LOG.debug('Marking with init system %s', init)
      with file(os.path.join(path, init), 'w'):
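On the "osd id from a disk device" point above: once an activated osd's
data partition is mounted, the id is recoverable from the `whoami` file
that activation writes into the data directory. A hedged sketch only
(the function name is made up, and locating and mounting the partition
to reach that directory is deliberately left out):

```python
import os

def osd_id_from_data_dir(path):
    """Read an osd's id from its activated data directory.

    Activation records the id in a 'whoami' file there (e.g.
    /var/lib/ceph/osd/ceph-0/whoami contains '0'). Mounting the
    partition that holds this directory is out of scope for this
    sketch.
    """
    with open(os.path.join(path, 'whoami')) as f:
        return int(f.read().strip())
```

Something along these lines would also give ceph-deploy what it needs
for things like 'list' and 'destroy'.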