Author: Zhou Hongfei, Senior Technical Expert at 新炬网络.
Preface
MongoDB's sharding mechanism solves the problems of massive data storage and dynamic scale-out, but on its own it falls short of the high reliability and high availability that production demands: a single-point failure of a Shard Server, for example, is not handled. This is what the "Replica Sets + Sharding" solution addresses. This article walks through a real deployment at a company of a MongoDB high-availability architecture that combines replica sets with sharding, using MongoDB 3.0.6.
According to MongoDB's own release notes, MongoDB 3.0 and later deliver a 7-10x improvement in write performance, add up to 80% data compression, and can cut operational costs by up to 95%.
New features in MongoDB 3.0 include: a pluggable storage engine API, support for the WiredTiger storage engine, MMAPv1 improvements, comprehensive replica set enhancements, sharded cluster improvements, and better security.
Given these substantial improvements, this article uses the then-latest 3.0.6 release. The examples use the current YAML-style configuration file format, which is documented on the official website.
1. Replica Sets + Sharding Architecture
The Replica Sets + Sharding solution consists of:
Shard servers: each shard is a replica set, giving every data node a backup, automatic failover, and automatic recovery;
Config servers: three config servers ensure metadata integrity;
Router processes: three mongos processes provide load balancing and improve client access performance, as the example connection string below shows.
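Since the three mongos routers are equivalent, a client can simply list all of them and let its driver fail over between them. As a sketch (exact behavior depends on the driver), a standard MongoDB connection URI naming the three routers from the table below would look like:
mongodb://172.16.202.201:60000,172.16.202.202:60000,172.16.202.203:60000/logs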
The completed Replica Sets + Sharding environment is shown in the figure below:
2. Building the Highly Available Architecture
The Replica Sets + Sharding architecture avoids the Shard Server single point of failure of a plain sharded deployment; the combination solves the high-availability problem of a sharded architecture.
The ports each server listens on are shown in the following table.

| Host      | IP             | Service and port        |
|-----------|----------------|-------------------------|
| Mongodb01 | 172.16.202.201 | Mongod shard1_1   11731 |
|           |                | Mongod shard2_1   11732 |
|           |                | Mongod shard3_1   11733 |
|           |                | Mongod config     30000 |
|           |                | Mongos 1          60000 |
| Mongodb02 | 172.16.202.202 | Mongod shard1_2   11731 |
|           |                | Mongod shard2_2   11732 |
|           |                | Mongod shard3_2   11733 |
|           |                | Mongod config     30000 |
|           |                | Mongos 2          60000 |
| Mongodb03 | 172.16.202.203 | Mongod shard1_3   11731 |
|           |                | Mongod shard2_3   11732 |
|           |                | Mongod shard3_3   11733 |
|           |                | Mongod config     30000 |
|           |                | Mongos 3          60000 |
2.1. Creating the mongo User
Create the mongo user on all three servers, as shown below:
[root@mongodb01 ~]# useradd mongo
[root@mongodb01 ~]# passwd mongo
[root@mongodb01 ~]# su - mongo
[mongo@mongodb01 ~]$
2.2. Creating Data Directories
Under the mongo user, first create the data directories for the shard servers and the config server, a logs directory for log files, and a config directory for configuration files.
On mongodb01, create the shard server and config server data directories, the logs directory, and the config directory:
[mongo@mongodb01 ~]$ mkdir -p /home/mongo/data/shard1_1
[mongo@mongodb01 ~]$ mkdir -p /home/mongo/data/shard2_1
[mongo@mongodb01 ~]$ mkdir -p /home/mongo/data/shard3_1
[mongo@mongodb01 ~]$ mkdir -p /home/mongo/data/config
[mongo@mongodb01 ~]$ mkdir -p /home/mongo/data/logs
[mongo@mongodb01 ~]$ mkdir -p /home/mongo/config 
As the commands above show, /home/mongo/data/shard1_1 is used by the shard1 primary, /home/mongo/data/shard2_1 by the shard2 arbiter, /home/mongo/data/shard3_1 by the shard3 secondary, /home/mongo/data/config by this node's config server in the Replica Sets + Sharding architecture, /home/mongo/data/logs by the logs, and /home/mongo/config by the configuration files.
On mongodb02, create the shard server and config server data directories, the logs directory, and the config directory:
[mongo@mongodb02 ~]$ mkdir -p /home/mongo/data/shard1_2
[mongo@mongodb02 ~]$ mkdir -p /home/mongo/data/shard2_2
[mongo@mongodb02 ~]$ mkdir -p /home/mongo/data/shard3_2
[mongo@mongodb02 ~]$ mkdir -p /home/mongo/data/config
[mongo@mongodb02 ~]$ mkdir -p /home/mongo/data/logs
[mongo@mongodb02 ~]$ mkdir -p /home/mongo/config 
As the commands above show, /home/mongo/data/shard1_2 is used by the shard1 secondary, /home/mongo/data/shard2_2 by the shard2 primary, /home/mongo/data/shard3_2 by the shard3 arbiter, /home/mongo/data/config by this node's config server, /home/mongo/data/logs by the logs, and /home/mongo/config by the configuration files.
On mongodb03, create the shard server and config server data directories, the logs directory, and the config directory:
[mongo@mongodb03 ~]$ mkdir -p /home/mongo/data/shard1_3
[mongo@mongodb03 ~]$ mkdir -p /home/mongo/data/shard2_3
[mongo@mongodb03 ~]$ mkdir -p /home/mongo/data/shard3_3
[mongo@mongodb03 ~]$ mkdir -p /home/mongo/data/config
[mongo@mongodb03 ~]$ mkdir -p /home/mongo/data/logs
[mongo@mongodb03 ~]$ mkdir -p /home/mongo/config 
As the commands above show, /home/mongo/data/shard1_3 is used by the shard1 arbiter, /home/mongo/data/shard2_3 by the shard2 secondary, /home/mongo/data/shard3_3 by the shard3 primary, /home/mongo/data/config by this node's config server, /home/mongo/data/logs by the logs, and /home/mongo/config by the configuration files.
2.3. Configuring the Replica Sets
Unpack mongodb-linux-x86_64-3.0.6.tgz on all three servers:
[mongo@mongodb01 ~]$ tar zxvf mongodb-linux-x86_64-3.0.6.tgz
[mongo@mongodb01 ~]$ mv mongodb-linux-x86_64-3.0.6 mongodb
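Optionally, the bin directory can be put on the mongo user's PATH so the binaries do not need full paths (a convenience assumption only; the commands in this article keep the full /home/mongo/mongodb/bin paths):
[mongo@mongodb01 ~]$ echo 'export PATH=$HOME/mongodb/bin:$PATH' >> ~/.bash_profile
[mongo@mongodb01 ~]$ source ~/.bash_profile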
2.3.1. Configuring Replica Set 1 for shard1
On mongodb01, run the following:
[mongo@mongodb01 ~]$ cd config/
[mongo@mongodb01 config]$ cat shard1_1.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/shard1_1.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/shard1_1
 directoryPerDB: true
 engine: wiredTiger
 wiredTiger:
  engineConfig:
   cacheSizeGB: 1
   directoryForIndexes: true
  collectionConfig:
   blockCompressor: snappy
processManagement:
  fork: true 
net:
 bindIp: 172.16.202.201
 port: 11731
replication:
 oplogSizeMB: 500
 replSetName: shard1
sharding:
 clusterRole: shardsvr
#security:
 #authorization: enabled
 #keyFile: /home/mongo/key/security
 
[mongo@mongodb01 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard1_1.conf
As shown above, this starts one member of Replica Set 1 on mongodb01; the replica set name is shard1 and it listens on port 11731.
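The commented security block at the end of the file points at a key file under /home/mongo/key/security. If internal authentication is enabled later, that key file must first be generated and copied to every node with restrictive permissions; a typical sketch (path as assumed in the config) is:
[mongo@mongodb01 ~]$ mkdir -p /home/mongo/key
[mongo@mongodb01 ~]$ openssl rand -base64 741 > /home/mongo/key/security
[mongo@mongodb01 ~]$ chmod 600 /home/mongo/key/security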
On mongodb02, run the following:
[mongo@mongodb02 ~]$ cd config/
[mongo@mongodb02 config]$ cat shard1_2.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/shard1_2.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/shard1_2
 directoryPerDB: true
 engine: wiredTiger
 wiredTiger:
  engineConfig:
   cacheSizeGB: 1
   directoryForIndexes: true
  collectionConfig:
   blockCompressor: snappy
processManagement:
  fork: true 
net:
 bindIp: 172.16.202.202
 port: 11731
replication:
 oplogSizeMB: 500
 replSetName: shard1
sharding:
 clusterRole: shardsvr
#security:
 #authorization: enabled
 #keyFile: /home/mongo/key/security
 
[mongo@mongodb02 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard1_2.conf
As shown above, this starts another member of Replica Set 1 on mongodb02; the replica set name is shard1 and it listens on port 11731.
On mongodb03, run the following:
[mongo@mongodb03 ~]$ cd config/
[mongo@mongodb03 config]$ cat shard1_3.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/shard1_3.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/shard1_3
 directoryPerDB: true
 engine: wiredTiger
 wiredTiger:
  engineConfig:
   cacheSizeGB: 1
   directoryForIndexes: true
  collectionConfig:
   blockCompressor: snappy
processManagement:
  fork: true 
net:
 bindIp: 172.16.202.203
 port: 11731
replication:
 oplogSizeMB: 500
 replSetName: shard1
sharding:
 clusterRole: shardsvr
#security:
 #authorization: enabled
 #keyFile: /home/mongo/key/security
 
[mongo@mongodb03 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard1_3.conf
As shown above, this starts the third member of Replica Set 1 on mongodb03; the replica set name is shard1 and it listens on port 11731.
Connect to the mongod listening on port 11731 of mongodb01 and initialize Replica Set 1:
[mongo@mongodb01 ~]$ /home/mongo/mongodb/bin/mongo 172.16.202.201:11731
MongoDB shell version: 3.0.6
connecting to: 172.16.202.201:11731/test
> config={_id:'shard1',members:[{_id:0,host:'172.16.202.201:11731',priority:2},{_id:1,host:'172.16.202.202:11731'},{_id:2,host:'172.16.202.203:11731',arbiterOnly:true}]}
{
"_id" : "shard1",
"members" : [
           {
                    "_id" : 0,
                    "host" : "172.16.202.201:11731",
                    "priority" : 2
           },
           {
                    "_id" : 1,
                    "host" : "172.16.202.202:11731"
           },
           {
                    "_id" : 2,
                    "host" : "172.16.202.203:11731",
                    "arbiterOnly" : true
           }
]
}
> rs.initiate(config)
{ "ok" : 1 }
 
The rs.initiate(config) command above initializes Replica Set 1 for shard1, with 172.16.202.201 as the preferred primary (priority 2), 172.16.202.202 as a secondary, and 172.16.202.203 as an arbiter.
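Once the election finishes, the shell prompt changes to shard1:PRIMARY>. Because the first member carries priority 2, the elected primary should be 172.16.202.201:11731, which can be confirmed with:
shard1:PRIMARY> rs.isMaster().primary
172.16.202.201:11731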
2.3.2. Configuring Replica Set 2 for shard2
On mongodb01, run the following:
[mongo@mongodb01 ~]$ cd config/
[mongo@mongodb01 config]$ cat shard2_1.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/shard2_1.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/shard2_1
 directoryPerDB: true
 engine: wiredTiger
 wiredTiger:
  engineConfig:
   cacheSizeGB: 1
   directoryForIndexes: true
  collectionConfig:
   blockCompressor: snappy
processManagement:
  fork: true 
net:
 bindIp: 172.16.202.201
 port: 11732
replication:
 oplogSizeMB: 500
 replSetName: shard2
sharding:
 clusterRole: shardsvr
#security:
 #authorization: enabled
 #keyFile: /home/mongo/key/security
 
[mongo@mongodb01 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard2_1.conf
As shown above, this starts one member of Replica Set 2 on mongodb01; the replica set name is shard2 and it listens on port 11732.
On mongodb02, run the following:
[mongo@mongodb02 ~]$ cd config/
[mongo@mongodb02 config]$ cat shard2_2.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/shard2_2.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/shard2_2
 directoryPerDB: true
 engine: wiredTiger
 wiredTiger:
  engineConfig:
   cacheSizeGB: 1
   directoryForIndexes: true
  collectionConfig:
   blockCompressor: snappy
processManagement:
  fork: true 
net:
 bindIp: 172.16.202.202
 port: 11732
replication:
 oplogSizeMB: 500
 replSetName: shard2
sharding:
 clusterRole: shardsvr
#security:
 #authorization: enabled
 #keyFile: /home/mongo/key/security
 
[mongo@mongodb02 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard2_2.conf
As shown above, this starts another member of Replica Set 2 on mongodb02; the replica set name is shard2 and it listens on port 11732.
On mongodb03, run the following:
[mongo@mongodb03 ~]$ cd config/
[mongo@mongodb03 config]$ cat shard2_3.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/shard2_3.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/shard2_3
 directoryPerDB: true
 engine: wiredTiger
 wiredTiger:
  engineConfig:
   cacheSizeGB: 1
   directoryForIndexes: true
  collectionConfig:
   blockCompressor: snappy
processManagement:
  fork: true 
net:
 bindIp: 172.16.202.203
 port: 11732
replication:
 oplogSizeMB: 500
 replSetName: shard2
sharding:
 clusterRole: shardsvr
#security:
 #authorization: enabled
 #keyFile: /home/mongo/key/security
 
[mongo@mongodb03 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard2_3.conf
As shown above, this starts the third member of Replica Set 2 on mongodb03; the replica set name is shard2 and it listens on port 11732.
Connect to the mongod listening on port 11732 of mongodb02 and initialize Replica Set 2:
[mongo@mongodb02 config]$ /home/mongo/mongodb/bin/mongo 172.16.202.202:11732
MongoDB shell version: 3.0.6
connecting to: 172.16.202.202:11732/test
> config={_id:'shard2',members:[{_id:0,host:'172.16.202.201:11732',arbiterOnly:true},{_id:1,host:'172.16.202.202:11732',priority:2},{_id:2,host:'172.16.202.203:11732'}]}
{
"_id" : "shard2",
"members" : [
           {
                    "_id" : 0,
                    "host" : "172.16.202.201:11732",
                    "arbiterOnly" : true
           },
           {
                    "_id" : 1,
                    "host" : "172.16.202.202:11732",
                    "priority" : 2
           },
           {
                    "_id" : 2,
                    "host" : "172.16.202.203:11732"
           }
]
}
>  rs.initiate(config)
{ "ok" : 1 }
The rs.initiate(config) command above initializes Replica Set 2 for shard2, with 172.16.202.202 as the preferred primary (priority 2), 172.16.202.203 as a secondary, and 172.16.202.201 as an arbiter.
2.3.3. Configuring Replica Set 3 for shard3
On mongodb01, run the following:
[mongo@mongodb01 ~]$ cd config/
[mongo@mongodb01 config]$ cat shard3_1.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/shard3_1.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/shard3_1
 directoryPerDB: true
 engine: wiredTiger
 wiredTiger:
  engineConfig:
   cacheSizeGB: 1
   directoryForIndexes: true
  collectionConfig:
   blockCompressor: snappy
processManagement:
  fork: true 
net:
 bindIp: 172.16.202.201
 port: 11733
replication:
 oplogSizeMB: 500
 replSetName: shard3
sharding:
 clusterRole: shardsvr
#security:
 #authorization: enabled
 #keyFile: /home/mongo/key/security
 
[mongo@mongodb01 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard3_1.conf
As shown above, this starts one member of Replica Set 3 on mongodb01; the replica set name is shard3 and it listens on port 11733.
On mongodb02, run the following:
[mongo@mongodb02 ~]$ cd config/
[mongo@mongodb02 config]$ cat shard3_2.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/shard3_2.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/shard3_2
 directoryPerDB: true
 engine: wiredTiger
 wiredTiger:
  engineConfig:
   cacheSizeGB: 1
   directoryForIndexes: true
  collectionConfig:
   blockCompressor: snappy
processManagement:
  fork: true 
net:
 bindIp: 172.16.202.202
 port: 11733
replication:
 oplogSizeMB: 500
 replSetName: shard3
sharding:
 clusterRole: shardsvr
#security:
 #authorization: enabled
 #keyFile: /home/mongo/key/security
 
[mongo@mongodb02 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard3_2.conf
As shown above, this starts another member of Replica Set 3 on mongodb02; the replica set name is shard3 and it listens on port 11733.
On mongodb03, run the following:
[mongo@mongodb03 ~]$ cd config/
[mongo@mongodb03 config]$ cat shard3_3.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/shard3_3.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/shard3_3
 directoryPerDB: true
 engine: wiredTiger
 wiredTiger:
  engineConfig:
   cacheSizeGB: 1
   directoryForIndexes: true
  collectionConfig:
   blockCompressor: snappy
processManagement:
  fork: true 
net:
 bindIp: 172.16.202.203
 port: 11733
replication:
 oplogSizeMB: 500
 replSetName: shard3
sharding:
 clusterRole: shardsvr
#security:
 #authorization: enabled
 #keyFile: /home/mongo/key/security
 
[mongo@mongodb03 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/shard3_3.conf
As shown above, this starts the third member of Replica Set 3 on mongodb03; the replica set name is shard3 and it listens on port 11733.
Connect to the mongod listening on port 11733 of mongodb03 and initialize Replica Set 3:
[mongo@mongodb03 config]$ /home/mongo/mongodb/bin/mongo 172.16.202.203:11733
MongoDB shell version: 3.0.6
connecting to: 172.16.202.203:11733/test
> config={_id:'shard3',members:[{_id:0,host:'172.16.202.201:11733'},{_id:1,host:'172.16.202.202:11733',arbiterOnly:true},{_id:2,host:'172.16.202.203:11733',priority:2}]}
{
"_id" : "shard3",
"members" : [
           {
                    "_id" : 0,
                    "host" : "172.16.202.201:11733"
           },
           {
                    "_id" : 1,
                    "host" : "172.16.202.202:11733",
                    "arbiterOnly" : true
           },
           {
                   "_id" : 2,
                    "host" : "172.16.202.203:11733",
                    "priority" : 2
           }
]
}
> rs.initiate(config)
{ "ok" : 1 }
The rs.initiate(config) command above initializes Replica Set 3 for shard3, with 172.16.202.203 as the preferred primary (priority 2), 172.16.202.201 as a secondary, and 172.16.202.202 as an arbiter.
2.3.4. Checking Replica Set Status
Connect to the shard1 primary (port 11731 on mongodb01) and run rs.status() to confirm each member's role:
shard1:PRIMARY> rs.status()
{
"set" : "shard1",
"date" : ISODate("2015-11-25T10:53:06.091Z"),
"myState" : 1,
"members" : [
           {
                    "_id" : 0,
                    "name" : "172.16.202.201:11731",
                    "health" : 1,
                    "state" : 1,
                    "stateStr" : "PRIMARY", #主库
                    "uptime" : 3009,
                    "optime" : Timestamp(1448448493, 1),
                    "optimeDate" : ISODate("2015-11-25T10:48:13Z"),
                    "electionTime" : Timestamp(1448448497, 1),
                    "electionDate" : ISODate("2015-11-25T10:48:17Z"),
                    "configVersion" : 1,
                    "self" : true
           },
           {
                    "_id" : 1,
                    "name" : "172.16.202.202:11731",
                    "health" : 1,
                    "state" : 2,
                    "stateStr" : "SECONDARY", #复本
                    "uptime" : 292,
                    "optime" : Timestamp(1448448493, 1),
                    "optimeDate" : ISODate("2015-11-25T10:48:13Z"),
                    "lastHeartbeat" : ISODate("2015-11-25T10:53:05.389Z"),
                    "lastHeartbeatRecv" : ISODate("2015-11-25T10:53:05.391Z"),
                    "pingMs" : 0,
                    "lastHeartbeatMessage" : "could not find member to sync from",
                    "configVersion" : 1
           },
           {
                    "_id" : 2,
                    "name" : "172.16.202.203:11731",
                    "health" : 1,
                    "state" : 7,
                    "stateStr" : "ARBITER",  #仲裁
                    "uptime" : 292,
                    "lastHeartbeat" : ISODate("2015-11-25T10:53:05.391Z"),
                    "lastHeartbeatRecv" : ISODate("2015-11-25T10:53:05.390Z"),
                    "pingMs" : 0,
                    "configVersion" : 1
           }
],
"ok" : 1
}
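shard2 and shard3 can be checked the same way, for example non-interactively with --eval; per the priority settings, the primaries should be on 172.16.202.202 for shard2 and 172.16.202.203 for shard3:
[mongo@mongodb02 ~]$ /home/mongo/mongodb/bin/mongo 172.16.202.202:11732 --eval "printjson(rs.status())"
[mongo@mongodb03 ~]$ /home/mongo/mongodb/bin/mongo 172.16.202.203:11733 --eval "printjson(rs.status())"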
2.4. Configuring the Three Config Servers
Run the following on all three servers:
[mongo@mongodb01 config]$ cat config.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/config.log
 logAppend: true
storage:
 journal:
  enabled: true
 dbPath: /home/mongo/data/config
 directoryPerDB: true
processManagement:
  fork: true 
net:
 #bindIp: 172.16.202.201 # decide for your environment whether to bind a specific IP
 port: 30000
sharding:
 clusterRole: configsvr
 
[mongo@mongodb01 config]$ /home/mongo/mongodb/bin/mongod -f /home/mongo/config/config.conf
As shown above, start the Config Server on each of the three servers; clusterRole: configsvr makes this a mongod process, listening on port 30000.
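To verify that each config server is up and reachable, a simple ping can be issued against port 30000 on each host, for example:
[mongo@mongodb01 ~]$ /home/mongo/mongodb/bin/mongo 172.16.202.201:30000 --eval "printjson(db.adminCommand({ping: 1}))"
{ "ok" : 1 }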
2.5. Configuring the Three Router Processes
Run the following on all three servers:
[mongo@mongodb01 config]$ cat mongos.conf
systemLog:
 destination: file
 ##Log
 path: /home/mongo/data/logs/mongo.log
 logAppend: true
 
processManagement:
  fork: true 
net:
 #bindIp: 172.16.202.201
 port: 60000
sharding:
 configDB: 172.16.202.201:30000,172.16.202.202:30000,172.16.202.203:30000
 
[mongo@mongodb01 config]$ /home/mongo/mongodb/bin/mongos -f /home/mongo/config/mongos.conf
 
As shown above, start the mongos router on each of the three servers; it listens on port 60000 and points at the IP addresses and ports of the config servers on all three machines.
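To confirm that a connection really goes through a mongos rather than a plain mongod, the isdbgrid admin command can be used; a mongos answers with "isdbgrid" : 1, while a mongod returns an error:
[mongo@mongodb01 ~]$ /home/mongo/mongodb/bin/mongo 172.16.202.202:60000 --eval "printjson(db.adminCommand({isdbgrid: 1}))"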
2.6. Configuring the Sharded Cluster
Connect to the mongos on port 60000 of any one of the machines and switch to the admin database to begin configuring the sharding environment:
[mongo@mongodb01 logs]$ /home/mongo/mongodb/bin/mongo 172.16.202.201:60000
MongoDB shell version: 3.0.6
connecting to: 172.16.202.201:60000/test
mongos> use admin
switched to db admin
mongos> db.runCommand({addshard:"shard1/172.16.202.201:11731,172.16.202.202:11731,172.16.202.203:11731", name:"shard1"});
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> db.runCommand({addshard:"shard2/172.16.202.201:11732,172.16.202.202:11732,172.16.202.203:11732", name:"shard2"});
{ "shardAdded" : "shard2", "ok" : 1 }
mongos> db.runCommand({addshard:"shard3/172.16.202.201:11733,172.16.202.202:11733,172.16.202.203:11733", name:"shard3"});
{ "shardAdded" : "shard3", "ok" : 1 }
Each addshard command above registers one replica set as a shard: the three members of Replica Set 1 become Shard Server 1 (shard1), the members of Replica Set 2 become Shard Server 2 (shard2), and the members of Replica Set 3 become Shard Server 3 (shard3). The result can be verified as shown below.
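The registered shards can be double-checked with the listshards command (the session is already on the admin database), which should report all three shards together with their replica set member addresses:
mongos> db.runCommand({listshards: 1})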
Next, activate sharding, using a hashed shard key:
[mongo@mongodb01 logs]$ /home/mongo/mongodb/bin/mongo 172.16.202.201:60000
MongoDB shell version: 3.0.6
connecting to: 172.16.202.201:60000/test
mongos> use admin
switched to db admin
mongos> db.runCommand({enablesharding:"logs"})
{ "ok" : 1 }
mongos> db.runCommand({shardcollection:"logs.users",key:{id:"hashed"}})
{ "collectionsharded" : "logs.users", "ok" : 1 }
As shown above, db.runCommand({enablesharding:"logs"}) first enables sharding on the logs database; db.runCommand({shardcollection:"logs.users",key:{id:"hashed"}}) then shards the users collection on a hashed id key.
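To exercise the hashed shard key, some test documents can be inserted through mongos; a hypothetical sample workload whose field matches the shard key above:
mongos> use logs
switched to db logs
mongos> for (var i = 1; i <= 100000; i++) { db.users.insert({id: i, name: "user" + i}) }
Because id is hashed, consecutive values scatter across chunks instead of accumulating on a single shard, which produces the even chunk spread seen in the next section.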
2.7. Viewing the Shards
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("56559b076bc525804dde4141")
}
  shards:
{  "_id" : "shard1",  "host" : "shard1/172.16.202.201:11731,172.16.202.202:11731" }
{  "_id" : "shard2",  "host" : "shard2/172.16.202.202:11732,172.16.202.203:11732" }
{  "_id" : "shard3",  "host" : "shard3/172.16.202.201:11733,172.16.202.203:11733" }
  balancer:
Currently enabled:  yes
Currently running:  no
Failed balancer rounds in last 5 attempts:  0
Migration Results for the last 24 hours:
           2 : Success
           1 : Failed with error 'could not acquire collection lock for logs.users to migrate chunk [{ : MinKey },{ : MaxKey }) :: caused by :: Lock for migrating chunk [{ : MinKey }, { : MaxKey }) in logs.users is taken.', from shard1 to shard2
           2 : Failed with error 'migration already in progress', from shard1 to shard2
  databases:
{  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
{  "_id" : "logs",  "partitioned" : true,  "primary" : "shard1" }
           logs.users
                    shard key: { "id" : "hashed" }
                    chunks:
                             shard1        2
                             shard2        2
                             shard3        2
                    { "id" : { "$minKey" : 1 } } -->> { "id" : NumberLong("-6148914691236517204") } on : shard1 Timestamp(3, 2)
                    { "id" : NumberLong("-6148914691236517204") } -->> { "id" : NumberLong("-3074457345618258602") } on : shard1 Timestamp(3, 3)
                    { "id" : NumberLong("-3074457345618258602") } -->> { "id" : NumberLong(0) } on : shard3 Timestamp(3, 4)
                    { "id" : NumberLong(0) } -->> { "id" : NumberLong("3074457345618258602") } on : shard3 Timestamp(3, 5)
                    { "id" : NumberLong("3074457345618258602") } -->> { "id" : NumberLong("6148914691236517204") } on : shard2 Timestamp(3, 6)
                    { "id" : NumberLong("6148914691236517204") } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(3, 7)