Environment: two servers, ceph1 and ceph2 (ceph-deploy is already installed on ceph1).
Each server has one S3500 SSD and two Hitachi 1 TB HDDs.
The public, cluster, and management networks are combined on a single network; the node addresses are 128.128.128.9 and 128.128.128.10 respectively.
1. Add the following to /etc/hosts:
128.128.128.9 ceph1
128.128.128.10 ceph2
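To confirm the names resolve on both machines, a quick check (assuming both hosts are already up and reachable):
ping -c 2 ceph1
ping -c 2 ceph2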
2. Set up passwordless SSH between the nodes with ssh-keygen and ssh-copy-id.
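A minimal sketch, assuming root is the deploy user on both nodes (run on ceph1, and repeat from ceph2 if both directions are needed):
ssh-keygen                # accept the defaults; leave the passphrase empty
ssh-copy-id root@ceph2    # installs the public key on ceph2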
3. Create a working directory on ceph1 to hold the cluster configuration files (the ceph-deploy commands below are run from it):
mkdir /root/mycluster
cd /root/mycluster
4. Install Ceph on both nodes:
ceph-deploy install ceph1 ceph2
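After the install finishes, it is worth confirming both nodes ended up on the same release (a quick check):
ceph --version    # run on ceph1 and ceph2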
5. Create the cluster, with ceph1 as the initial monitor:
ceph-deploy new ceph1
This writes ceph.conf and ceph.mon.keyring into the working directory; the next step edits that ceph.conf.
6. Edit the ceph.conf configuration file, adding:
public_network = 128.128.128.0/24
cluster_network = 128.128.128.0/24
enable experimental unrecoverable data corrupting features = bluestore rocksdb debug_white_box_testing_ec_overwrites
bluestore block db size = 10737418240   # 10 GB
bluestore block wal size = 10737418240  # 10 GB
osd objectstore = bluestore
mon_allow_pool_delete = true
rbd_cache = false

[osd]
bluestore = true
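If ceph.conf is edited again after the monitors are up, the copy on each node has to be refreshed as well; one way to do that (run from /root/mycluster):
ceph-deploy --overwrite-conf config push ceph1 ceph2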
7. Bootstrap the initial monitor and gather the keys:
ceph-deploy mon create-initial
To add another monitor later:
ceph-deploy mon add ceph2
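Either way, the monitor state can be verified once the command returns (a quick check):
ceph -s               # overall status; the monmap should list every monitor
ceph quorum_status    # quorum membership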
8. Zap (wipe) the data disks:
ceph-deploy disk zap {ceph-node}:{dest-disk}
For example:
ceph-deploy disk zap ceph1:/dev/sdb
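Zapping destroys the partition table, so it is worth double-checking which device is which first:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT    # the 1 TB HDDs and the SSD are easy to tell apart by size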
9. Copy ceph.bootstrap-osd.keyring to /var/lib/ceph/bootstrap-osd/ and rename it ceph.keyring:
cp /root/mycluster/ceph.bootstrap-osd.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
and for ceph2:
scp /root/mycluster/ceph.bootstrap-osd.keyring ceph2:/var/lib/ceph/bootstrap-osd/ceph.keyring
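A quick sanity check that the keyring landed on both nodes:
ls -l /var/lib/ceph/bootstrap-osd/ceph.keyring    # run on ceph1 and ceph2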
10. Add the OSDs:
ceph-disk prepare --bluestore --block.db /dev/sdb --block.wal /dev/sdb /dev/sdc
Here /dev/sdb is the SSD (it holds the BlueStore DB and WAL partitions) and /dev/sdc is the HDD that carries the data.
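The same prepare is repeated for the second HDD on each node; assuming it shows up as /dev/sdd (device names vary per machine), something like:
ceph-disk prepare --bluestore --block.db /dev/sdb --block.wal /dev/sdb /dev/sdd
With the stock udev rules the prepared OSDs activate on their own; if one does not, it can be activated by hand and the result checked:
ceph-disk activate /dev/sdc1    # partition 1 is the data partition created by prepare
ceph osd tree                   # all OSDs should be listed and 'up'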