Introducing Ceph Storage

What is Ceph?

Ceph is an open-source, distributed object store designed to provide excellent performance, reliability and scalability.

Through the use of the Controlled Replication Under Scalable Hashing (CRUSH) algorithm, Ceph eliminates the need for centralised metadata and can distribute the load across all nodes in the cluster.

Since CRUSH is an algorithm, data placement is calculated rather than looked up in tables, which enables Ceph to scale to thousands of petabytes without the bottlenecks and single points of failure that centralised lookups introduce.

Clients also form direct connections with the servers storing the requested data, so there is no centralised bottleneck in the data path.
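
To illustrate the idea of calculated placement, here is a minimal Python sketch (not the real CRUSH algorithm) in which any client, given only the list of OSDs, computes the same replica set for an object by ranking OSDs with a hash; nothing is looked up in a central table.

    import hashlib

    def place_object(obj_name, osds, replicas=3):
        """Toy stand-in for CRUSH: rank OSDs by hashing (object, OSD) pairs
        and take the top `replicas`. Placement is computed, never looked up."""
        ranked = sorted(
            osds,
            key=lambda osd: hashlib.sha1(f"{obj_name}:{osd}".encode()).hexdigest(),
        )
        return ranked[:replicas]

    osds = [f"osd.{i}" for i in range(12)]
    # Every client that runs this calculation independently gets the same answer.
    print(place_object("my-object", osds))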

With ever-increasing data storage requirements and the challenges faced by legacy RAID-based systems, Ceph is well placed to provide an answer to these problems.


Types of Ceph Storage

Ceph provides three main types of storage:

  1. Block, via the RADOS Block Device (RBD);
  2. File, via the Ceph File System (CephFS); and
  3. Object, via the Reliable Autonomic Distributed Object Store (RADOS) Gateway (RGW), which provides Simple Storage Service (S3) and Swift compatible storage (see the example after this list).
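
As a taste of the object interface, the sketch below talks to an RGW endpoint through its S3-compatible API using boto3. The endpoint URL, port and credentials are placeholders you would replace with values from your own gateway (for example, keys created for an RGW user with radosgw-admin).

    import boto3

    # Placeholder endpoint and credentials; substitute your own RGW address
    # (7480 is RGW's default port) and the access/secret keys of an RGW user.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:7480",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    s3.create_bucket(Bucket="demo-bucket")
    s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello from ceph")
    print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())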

Ceph is a pure software-defined storage (SDS) solution, which means you are free to run it on any commodity hardware as long as it provides the correct guarantees around data consistency.

How Ceph Works

The core storage layer in Ceph is RADOS, the distributed object store on which the higher-level storage protocols are built.

The RADOS layer in Ceph comprises a number of Object Storage Daemons (OSDs). Each OSD typically maps to a single physical hard disk via a basic Host Bus Adapter (HBA).
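
To make the RADOS layer concrete, here is a minimal sketch using the librados Python bindings (the python3-rados package). It assumes a reachable cluster, a readable /etc/ceph/ceph.conf with a keyring, and an existing pool named "testpool"; both names are assumptions for the example.

    import rados

    # Connect using the local ceph.conf and keyring (assumed to exist).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()

    # "testpool" is an assumed, pre-existing pool for this example.
    ioctx = cluster.open_ioctx("testpool")
    ioctx.write_full("greeting", b"hello rados")   # store an object
    print(ioctx.read("greeting"))                  # read it back

    ioctx.close()
    cluster.shutdown()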

The other key components of a Ceph cluster are the monitors, which are responsible for forming a quorum using the Paxos algorithm. By forming a quorum, the monitors ensure that they are in a state where they are allowed to make authoritative decisions for the cluster, avoiding split-brain scenarios. The monitors are not directly involved in the data path and do not have the same hardware requirements as the OSDs. They mainly provide a known cluster state, including membership, configuration and statistics, via various cluster maps.

These cluster maps are used by both Ceph cluster components and clients to describe the cluster topology and enable data to be safely stored in the right location.
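
As an illustration, the monitors can be asked for quorum and map information directly. The sketch below (again assuming the python3-rados bindings and a working ceph.conf; the exact command interface can vary between Ceph releases) sends a quorum_status command to the monitors and prints which monitors are currently in quorum.

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()

    # Ask the monitors for their quorum status as JSON.
    cmd = json.dumps({"prefix": "quorum_status", "format": "json"})
    ret, out, errs = cluster.mon_command(cmd, b"")
    if ret == 0:
        status = json.loads(out)
        print("monitors in quorum:", status.get("quorum_names"))

    cluster.shutdown()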

Given the scale at which Ceph is intended to operate, tracking the state and placement of every single object in the cluster would be computationally very expensive. Ceph solves this problem by using CRUSH to place objects into groups of objects named Placement Groups (PGs). This reduces the problem of tracking millions of objects to tracking a much more manageable number of PGs, typically in the thousands.
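
Here is a simplified sketch of that grouping, assuming a pool with 128 PGs purely for illustration: objects are hashed into a PG, and CRUSH then only has to map each PG, not each object, to a set of OSDs.

    import zlib

    PG_NUM = 128   # number of PGs in the pool (an assumption for this sketch)

    def object_to_pg(obj_name, pg_num=PG_NUM):
        # Ceph hashes the object name and reduces it modulo the PG count;
        # crc32 is used here purely for illustration.
        return zlib.crc32(obj_name.encode()) % pg_num

    # A million objects collapse into at most PG_NUM placement groups,
    # so the cluster tracks thousands of PGs instead of millions of objects.
    pgs = {object_to_pg(f"object-{i}") for i in range(1_000_000)}
    print(len(pgs))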

Advantages of Using Ceph

  1. Performance – Because clients calculate data placement and talk to the OSDs directly, performance scales out with the cluster rather than being capped by a central controller.
  2. Reliability – Ceph provides a highly fault-tolerant storage system through the scale-out nature of its components.
  3. Commodity hardware – Ceph is designed to run on commodity hardware, which gives you the ability to design and build a cluster without the premium demanded by traditional tier-1 storage and server vendors.

Ceph Nodes

At the heart of every Ceph deployment is a Ceph storage cluster, which consists of the following daemons:

  1. Ceph Monitor – A Ceph Monitor maintains maps of the cluster state, including the monitor map, OSD map, manager map and CRUSH map.
  2. Ceph Manager – A Ceph Manager daemon keeps track of runtime metrics and the current state of the Ceph cluster, including storage utilisation, current performance metrics and system load (see the sketch after this list).
  3. Ceph OSD – A Ceph OSD stores data and handles data replication, recovery and rebalancing, and provides some monitoring information to Ceph Monitors and Managers by checking other Ceph OSDs for a heartbeat. At least three Ceph OSDs are required for redundancy and high availability.
  4. Ceph MDS – A Ceph Metadata Server stores metadata on behalf of the Ceph File System; block and object storage do not use the MDS.
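
As a small example of the kind of information these daemons expose, the sketch below (once more using the python3-rados bindings and a local ceph.conf, which are assumptions) connects to the cluster and prints its fsid and overall utilisation.

    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()

    print("cluster fsid:", cluster.get_fsid())

    # Overall utilisation as reported by the cluster (sizes in kilobytes).
    stats = cluster.get_cluster_stats()
    print("used KB:", stats["kb_used"], "of", stats["kb"],
          "objects:", stats["num_objects"])

    cluster.shutdown()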

Consider visiting the Ceph documentation website for more detailed information about Ceph.

Next, we shall look at how to deploy a simple Ceph cluster using Vagrant and VirtualBox.