
Documentation: modern versions of ceph are not backed by btrfs

Message ID 20190305123441.22934-1-jlayton@kernel.org (mailing list archive)
State New, archived
Series Documentation: modern versions of ceph are not backed by btrfs

Commit Message

Jeffrey Layton March 5, 2019, 12:34 p.m. UTC
Signed-off-by: Jeff Layton <jlayton@kernel.org>
---
 Documentation/filesystems/ceph.txt | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

Comments

Ilya Dryomov March 5, 2019, 1:30 p.m. UTC | #1
On Tue, Mar 5, 2019 at 1:34 PM Jeff Layton <jlayton@kernel.org> wrote:
>
> Signed-off-by: Jeff Layton <jlayton@kernel.org>
> ---
>  Documentation/filesystems/ceph.txt | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/Documentation/filesystems/ceph.txt b/Documentation/filesystems/ceph.txt
> index 1177052701e1..e5b69bceb033 100644
> --- a/Documentation/filesystems/ceph.txt
> +++ b/Documentation/filesystems/ceph.txt
> @@ -22,9 +22,7 @@ In contrast to cluster filesystems like GFS, OCFS2, and GPFS that rely
>  on symmetric access by all clients to shared block devices, Ceph
>  separates data and metadata management into independent server
>  clusters, similar to Lustre.  Unlike Lustre, however, metadata and
> -storage nodes run entirely as user space daemons.  Storage nodes
> -utilize btrfs to store data objects, leveraging its advanced features
> -(checksumming, metadata replication, etc.).  File data is striped
> +storage nodes run entirely as user space daemons.  File data is striped
>  across storage nodes in large chunks to distribute workload and
>  facilitate high throughputs.  When storage nodes fail, data is
>  re-replicated in a distributed fashion by the storage nodes themselves

Applied.  I updated the links at the bottom as well.

Thanks,

                Ilya

Patch

diff --git a/Documentation/filesystems/ceph.txt b/Documentation/filesystems/ceph.txt
index 1177052701e1..e5b69bceb033 100644
--- a/Documentation/filesystems/ceph.txt
+++ b/Documentation/filesystems/ceph.txt
@@ -22,9 +22,7 @@ In contrast to cluster filesystems like GFS, OCFS2, and GPFS that rely
 on symmetric access by all clients to shared block devices, Ceph
 separates data and metadata management into independent server
 clusters, similar to Lustre.  Unlike Lustre, however, metadata and
-storage nodes run entirely as user space daemons.  Storage nodes
-utilize btrfs to store data objects, leveraging its advanced features
-(checksumming, metadata replication, etc.).  File data is striped
+storage nodes run entirely as user space daemons.  File data is striped
 across storage nodes in large chunks to distribute workload and
 facilitate high throughputs.  When storage nodes fail, data is
 re-replicated in a distributed fashion by the storage nodes themselves
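
The paragraph retained by this patch describes file data being striped across
storage nodes in large chunks. As a rough illustration only (not part of the
patch, and not the kernel client's actual code), the standalone sketch below
shows how a file offset could map to an object index and an offset within that
object under a stripe_unit/stripe_count/object_size style layout; all names
and the example parameters are hypothetical.

/*
 * Hypothetical sketch: map a file offset to (object, offset-in-object)
 * under striping with stripe_unit (su), stripe_count (sc) and
 * object_size (os).  Illustrative only.
 */
#include <stdio.h>
#include <stdint.h>

struct obj_loc {
	uint64_t objno;		/* which object in the file */
	uint64_t objoff;	/* byte offset within that object */
};

static struct obj_loc map_offset(uint64_t off, uint64_t su, uint64_t sc,
				 uint64_t os)
{
	uint64_t bl = off / su;		/* stripe unit index */
	uint64_t stripeno = bl / sc;	/* stripe row */
	uint64_t stripepos = bl % sc;	/* object within the stripe row */
	uint64_t per_obj = os / su;	/* stripe units each object holds */
	uint64_t objsetno = stripeno / per_obj;
	struct obj_loc loc;

	loc.objno = objsetno * sc + stripepos;
	loc.objoff = (stripeno % per_obj) * su + off % su;
	return loc;
}

int main(void)
{
	/* example layout: 4 MB units, one object per stripe row */
	struct obj_loc loc = map_offset(9ULL << 20, 4 << 20, 1, 4 << 20);

	printf("object %llu, offset %llu\n",
	       (unsigned long long)loc.objno,
	       (unsigned long long)loc.objoff);
	return 0;
}

With the example parameters, a 9 MB file offset lands 1 MB into the third
4 MB object, which is the kind of chunk-wise distribution the documentation
paragraph is describing.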