System Optimization Settings
Limits configuration
$ cat /etc/security/limits.conf
* soft nofile 64000
* hard nofile 64000
* soft nproc 64000
* hard nproc 64000
Disable SELinux
$ setenforce 0
$ vi /etc/sysconfig/selinux
SELINUX=disabled
Disable transparent_hugepage
$ echo never > /sys/kernel/mm/transparent_hugepage/enabled
$ echo never > /sys/kernel/mm/transparent_hugepage/defrag
$ echo "echo never > /sys/kernel/mm/transparent_hugepage/enabled" >> /etc/rc.local
$ echo "echo never > /sys/kernel/mm/transparent_hugepage/defrag" >> /etc/rc.local
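One caveat for the rc.local approach above: on RHEL/CentOS 7 the file ships without the execute bit, so the appended lines never run at boot. A small sketch (guarded so it is a no-op where the paths do not exist):

```shell
# Assumption: systemd-based RHEL/CentOS 7, where /etc/rc.local is a
# symlink to /etc/rc.d/rc.local and is not executable by default.
[ -f /etc/rc.d/rc.local ] && chmod +x /etc/rc.d/rc.local
# Verify the runtime state; "[never]" means THP is disabled:
[ -f /sys/kernel/mm/transparent_hugepage/enabled ] \
  && cat /sys/kernel/mm/transparent_hugepage/enabled
```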
Disable NUMA when starting mongod
$ sysctl -w vm.zone_reclaim_mode=0
$ numactl --interleave=all /usr/local/mongodb/bin/mongod .....
Set readahead
$ blockdev --setra 0 /dev/mapper/dbvg-dblv    (the LV name)
Tips: for the MMAPv1 engine a readahead of 32 or 16 is recommended; for WiredTiger set it to 0. WiredTiger is the default engine since 3.2.
Set the disk scheduling policy
$ grubby --update-kernel=ALL --args="elevator=noop"
Tips: noop is recommended for virtualized environments and SSDs.
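The grubby change above only takes effect after a reboot. As a sketch, the scheduler can also be checked and switched at runtime (the device name sda is illustrative):

```shell
# Show available schedulers; the active one is bracketed, e.g. [noop].
cat /sys/block/sda/queue/scheduler
# Switch the running scheduler without a reboot (not persistent):
echo noop > /sys/block/sda/queue/scheduler
```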
Kernel parameter settings
$ cat >> /etc/sysctl.conf << EOF
fs.file-max=98000
kernel.pid_max=64000
kernel.threads-max=64000
vm.max_map_count=128000
EOF
$ sysctl -p
Set TCP keepalive
$ sysctl -w net.ipv4.tcp_keepalive_time=300
$ echo "net.ipv4.tcp_keepalive_time=300" >> /etc/sysctl.conf
Disable tuned (virtualized environments)
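Assuming a systemd-based host, a minimal sketch for this step would be:

```shell
# Stop tuned now and keep it from starting at boot, so its profiles
# cannot re-enable THP or change kernel parameters behind mongod's back.
systemctl stop tuned
systemctl disable tuned
```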
MongoDB Installation
Unpack the installation package
$ tar -xvf mongodb-linux-x86_64-rhel62-4.0.0.tgz -C /usr/local/
$ mv /usr/local/mongodb-linux-x86_64-rhel62-4.0.0 /usr/local/mongodb
Create the data directories
$ mkdir -p /service/mongodb/data
$ mkdir -p /service/mongodb/log
Configure the parameter file
systemLog:
  destination: file
  path: "/service/mongodb/shard1/logs/mongod.log"
  logAppend: true
storage:
  dbPath: "/service/mongodb/shard1/data"
  journal:
    enabled: true
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
processManagement:
  fork: true
  pidFilePath: "/service/mongodb/shard1/data/mongod.pid"
net:
  bindIpAll: true
  port: 20000
  maxIncomingConnections: 10000
operationProfiling:
  mode: slowOp
  slowOpThresholdMs: 200
setParameter:
  enableLocalhostAuthBypass: false
replication:
  oplogSizeMB: 2048
  replSetName: shard1
Configure environment variables
$ echo "export PATH=$PATH:/usr/local/mongodb/bin" >>/etc/profile
Start the database
$ numactl --interleave=all /usr/local/mongodb/bin/mongod --config /service/mongodb/shard1/conf/shard1.conf
Replica Set (replset) Configuration
Initialize the replica set
$ mongo -u "root" -p "abcd123#" --authenticationDatabase "admin" --host 10.240.204.157 --port 27017
> config = {_id: 'repl01', members: [
    {_id: 0, host: '10.240.204.157:27017'},
    {_id: 1, host: '10.240.204.149:27017'},
    {_id: 2, host: '10.240.204.165:27017', arbiterOnly: true}
  ]}
> rs.initiate(config)
View the replica set status
repl01:PRIMARY> rs.status()
Manual failover
In some situations the primary must be switched by hand. Run the stepDown command on the primary: it steps down to a secondary, and a new primary is elected.
repl01:PRIMARY> rs.stepDown()
Enable Login Authentication
Create the root user on the primary
$ mongo --port 27017
> use admin
> db.createUser({user: "root", pwd: "abcd123#", roles: [{role: "root", db: "admin"}]})
Create the key file and copy it to every node
$ openssl rand -base64 756 > /service/mongodb/key/rsa_key
$ chmod 600 /service/mongodb/key/rsa_key
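A mongod keyfile must contain between 6 and 1024 base64 characters. The sketch below (local filename is illustrative, coreutils only) shows why 756 random bytes fit that limit:

```shell
# 756 random bytes encode to exactly 756/3*4 = 1008 base64 characters,
# safely under mongod's 1024-character keyfile maximum.
head -c 756 /dev/urandom | base64 > rsa_key
chmod 600 rsa_key
# Count content characters, ignoring line wraps:
tr -d '\n' < rsa_key | wc -c
```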
Enable authentication and the key file
security:
  keyFile: "/service/mongodb/key/rsa_key"
  authorization: enabled
Restart the database
$ /usr/local/mongodb/bin/mongod --config /service/mongodb/shard1/conf/shard1.conf --shutdown
$ numactl --interleave=all /usr/local/mongodb/bin/mongod --config /service/mongodb/shard1/conf/shard1.conf
Shard Configuration
Shard node configuration
A shard node is configured the same way as the replica set above; just add the following parameters to its parameter file:
sharding:
  clusterRole: shardsvr
Config server replica set deployment
Config servers store the cluster's metadata and state. They are likewise deployed as a replica set for high availability; add the following parameters to the parameter file:
sharding:
  clusterRole: configsvr
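For reference, a complete config server member file might look like the sketch below. The paths are illustrative; port 40000 and the set name configSet mirror the configDB string used in the mongos configuration later in this document:

```yaml
systemLog:
  destination: file
  path: "/service/mongodb/config/logs/mongod.log"
  logAppend: true
storage:
  dbPath: "/service/mongodb/config/data"
  journal:
    enabled: true
processManagement:
  fork: true
net:
  bindIpAll: true
  port: 40000
security:
  keyFile: "/service/mongodb/key/rsa_key"
replication:
  replSetName: configSet
sharding:
  clusterRole: configsvr
```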
Mongos Router Configuration
Configure the parameter file
$ vi /etc/mongodb/mongos.conf
systemLog:
  destination: file
  path: "/service/mongos/logs/mongos.log"
  logAppend: true
processManagement:
  fork: true
net:
  bindIpAll: true
  port: 50000
sharding:
  configDB: configSet/10.240.204.157:40000,10.240.204.165:40000,10.240.204.149:40000
security:
  keyFile: "/service/mongodb/key/rsa_key"
Configure environment variables
$ echo "export PATH=$PATH:/usr/local/mongodb/bin" >>/etc/profile
Copy the shard's key file
[root@t-luhxdb01-p-szzb ~]# scp /service/mongodb/key/rsa_key @10.240.204.157:/service/mongos/key/rsa_key
[root@t-luhxdb01-p-szzb ~]# scp /service/mongodb/key/rsa_key @10.240.204.165:/service/mongos/key/rsa_key
[root@t-luhxdb01-p-szzb ~]# scp /service/mongodb/key/rsa_key @10.240.204.149:/service/mongos/key/rsa_key
Start mongos
$ numactl --interleave=all /usr/local/mongodb/bin/mongos --config /service/mongodb/mongos/conf/mongos.conf
Add the shard nodes
$ mongo -u "root" -p "abcd123#" --authenticationDatabase "admin" --port 50000
mongos> db.runCommand({addshard: "shard1/10.240.204.157:28000,10.240.204.149:28000,10.240.204.165:28000"})
mongos> db.runCommand({addshard: "shard2/10.240.204.165:29000,10.240.204.157:29000,10.240.204.149:29000"})
mongos> db.runCommand({addshard: "shard3/10.240.204.149:30000,10.240.204.165:30000,10.240.204.157:30000"})
View the shards
mongos> db.getSiblingDB("config").shards.find()
{ "_id" : "shard1", "host" : "shard1/10.240.204.157:28000,10.240.204.149:28000,10.240.204.165:28000", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/10.240.204.165:29000,10.240.204.157:29000,10.240.204.149:29000", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/10.240.204.149:30000,10.240.204.165:30000,10.240.204.157:30000", "state" : 1 }
Enable sharding on a database
mongos> use admin
mongos> db.runCommand({enablesharding: "mytest"})
Shard a collection
## Hashed sharding
mongos> sh.shardCollection("mytest.student", {_id: "hashed"})
## Range sharding
mongos> sh.shardCollection("mytest.student", {_id: 1})
Tips: the shard key must be backed by an index; for a non-empty collection, create it before sharding.
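The tip above can be illustrated with the collection used in this section (for an empty collection sh.shardCollection creates the index itself; for a non-empty one it must already exist):

```javascript
// In a mongos session: create the supporting index, then shard.
// A hashed shard key needs a hashed index on the same field.
use mytest
db.student.createIndex({ _id: "hashed" })
sh.shardCollection("mytest.student", { _id: "hashed" })
```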
View the sharding status
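The sharding state can be inspected from a mongos session, for example:

```javascript
// Summarize shards, databases, and chunk distribution across the cluster.
sh.status()
// Chunk and document distribution for one sharded collection:
db.getSiblingDB("mytest").getCollection("student").getShardDistribution()
```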
References
production-notes
production-checklist-operations