Customize CRUSH MAP for Ceph – Mixing SSD with SATA on same hosts

Why would you need this? It is one step further toward hyper-convergence: mixing different types of disks in the same boxes. CRUSH is flexible and topology-aware, so we can split the OSDs into two separate hierarchies and create two pools, “ssd” and “sata”, each backed by its own disk type.

Prerequisites: a Ceph cluster must be up and running.
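
A quick sanity check before touching anything, run from a node that has the admin keyring:

ceph -s        # overall cluster status, should report HEALTH_OK
ceph osd tree  # current CRUSH hierarchy, all OSDs should be up and in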

 

Back up your existing CRUSH map

Export the compiled CRUSH map:

ceph osd getcrushmap -o default.coloc

Decompile it into an editable text file:

crushtool -d default.coloc -o default.txt

 

Customizing the crushmap

Copy the decompiled CRUSH map and start customizing the copy:

cp default.txt customized.txt

vi customized.txt
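
While rearranging the host buckets you need to know which OSD sits on an SSD and which on a SATA disk. One quick way to check on each host is the kernel's rotational flag (a sketch; /dev/sdb and /dev/sdc are placeholders, use your actual OSD data disks):

ceph osd tree                        # note which OSD ids live on which host
cat /sys/block/sdb/queue/rotational  # 0 = SSD, 1 = rotational (SATA)
cat /sys/block/sdc/queue/rotational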

 

Your new map should look similar to this:

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# OSD SATAs

host ceph01-sata {
   id -2 # do not change unnecessarily
   # weight 0.000
   alg straw
   hash 0
   item osd.1 weight 1.000
   item osd.2 weight 1.000
}

host ceph02-sata {
   id -3 # do not change unnecessarily
   # weight 0.000
   alg straw
   hash 0
   item osd.4 weight 1.000
   item osd.5 weight 1.000
}

host ceph03-sata {
   id -4 # do not change unnecessarily
   # weight 0.000
   alg straw
   hash 0
   item osd.7 weight 1.000
   item osd.8 weight 1.000
}

# OSD SSDs

host ceph01-ssd {
   id -22 # do not change unnecessarily
   # weight 0.000
   alg straw
   hash 0
   item osd.0 weight 1.000
}

host ceph02-ssd {
   id -23 # do not change unnecessarily
   # weight 0.000
   alg straw
   hash 0
   item osd.3 weight 1.000
}

host ceph03-ssd {
   id -24 # do not change unnecessarily
   # weight 0.000
   alg straw
   hash 0
   item osd.6 weight 1.000
}

# SATA ROOT

root sata {
   id -1 # do not change unnecessarily
   # weight 0.000
   alg straw
   hash 0
   item ceph01-sata weight 2.000
   item ceph02-sata weight 2.000
   item ceph03-sata weight 2.000
}

# SSD ROOT

root ssd {
   id -21 # do not change unnecessarily
   # weight 0.000
   alg straw
   hash 0
   item ceph01-ssd weight 1.000
   item ceph02-ssd weight 1.000
   item ceph03-ssd weight 1.000
}

# SSD RULE

# rules
rule ssd {
   ruleset 0
   type replicated
   min_size 1
   max_size 10
   step take ssd
   step chooseleaf firstn 0 type host
   step emit
}

# SATA RULE

rule sata {
   ruleset 1
   type replicated
   min_size 1
   max_size 10
   step take sata
   step chooseleaf firstn 0 type host
   step emit
}

 

Compile and import the new customized map

crushtool -c customized.txt -o customized.coloc
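
Before importing, you can run the compiled map through crushtool’s built-in tester to make sure both rules actually produce mappings (the rule numbers correspond to the rulesets defined above):

crushtool -i customized.coloc --test --rule 0 --num-rep 3 --show-utilization   # ssd rule
crushtool -i customized.coloc --test --rule 1 --num-rep 3 --show-utilization   # sata rule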

ceph osd setcrushmap -i customized.coloc
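
On a cluster that already holds data, importing the new map will trigger data movement as placement groups are remapped to the new hierarchy; you can follow the progress with:

ceph -w   # or 'ceph -s' for a one-shot view of recovery status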

Now check your new topology:

ceph osd tree
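
In the tree you should now see two separate roots: ssd, holding the three *-ssd host buckets with osd.0, osd.3 and osd.6, and sata, holding the *-sata host buckets with the remaining six OSDs. You can also confirm that both rules made it in:

ceph osd crush rule ls         # should list the ssd and sata rules
ceph osd crush rule dump sata  # full definition of a single rule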

 

Create the new pools and apply rules

ceph osd pool create ssd 128 128

ceph osd pool create sata 128 128

ceph osd pool set ssd crush_ruleset 0

ceph osd pool set sata crush_ruleset 1
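
To verify the assignment (crush_ruleset is the pre-Luminous name of this setting; newer releases call it crush_rule):

ceph osd pool get ssd crush_ruleset
ceph osd pool get sata crush_ruleset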

 

Testing your new pools

Create a block device in each pool

rbd create test_sata --size 500G --pool sata --image-feature layering

rbd --image test_sata -p sata info

rbd create test_ssd --size 100G --pool ssd --image-feature layering

rbd --image test_ssd -p ssd info
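
As a final check, write a small test object into each pool and ask Ceph where it was placed; the acting set should only contain OSDs from the matching root (testobj and /etc/hosts are just placeholders):

rados -p ssd put testobj /etc/hosts
ceph osd map ssd testobj     # should map only to osd.0, osd.3 or osd.6
rados -p sata put testobj /etc/hosts
ceph osd map sata testobj    # should map only to the SATA OSDs
rados -p ssd rm testobj
rados -p sata rm testobj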
