MongoDB sharding cluster deployment details

  • 2021-01-02 22:02:44
  • OfStack

1. Environment description

1. Our production environment: MongoDB is deployed as a sharded cluster, but sharding is not enabled yet, i.e. all data lives on a single shard. If data volume grows later and needs to be distributed, collections can be sharded at any time, transparently to the business side.

2. Deployment of each role

Role                  IP                                            Port   Replica set name
mongos                172.21.244.101,172.21.244.102,172.21.244.94   27000  -
config server         172.21.244.101,172.21.244.102,172.21.244.94   27100  repl_configsvr
storage node (shard)  172.21.244.101,172.21.244.102,172.21.244.94   27101  shard1

3. MongoDB version


mongos> db.version()
4.0.4-62-g7e345a7

2. Basic information preparation

0. System optimization


echo "never" >/sys/kernel/mm/transparent_hugepage/enabled
echo "never" >/sys/kernel/mm/transparent_hugepage/defrag

1. Download the MongoDB base files


cd /chj/app
wget ops.chehejia.com:9090/pkg/chj_mongodb_4.0.4.tar.gz
tar -zxvf chj_mongodb_4.0.4.tar.gz

2. Create the relevant directories


# Create the base directory
mkdir -p /chj/data/mongodb/chj_db
# Move the MongoDB binaries into the bin folder under the base directory
mv chj_mongodb_4.0.4/bin /chj/data/mongodb/chj_db/bin
# Create the directory for the authentication key file
mkdir /chj/data/mongodb/chj_db/auth
# Create the configuration file directory
mkdir /chj/data/mongodb/chj_db/conf
# Create the config server data and log directories
mkdir -p /chj/data/mongodb/chj_db/config/data
mkdir /chj/data/mongodb/chj_db/config/log
# Create the mongos log directory
mkdir -p /chj/data/mongodb/chj_db/mongos/log
# Create the storage node (shard) data and log directories
mkdir -p /chj/data/mongodb/chj_db/shard1/data
mkdir /chj/data/mongodb/chj_db/shard1/log

3. Prepare the configuration files

A. mongos configuration file


vim /chj/data/mongodb/chj_db/conf/mongos.conf
systemLog:
 destination: file
 logAppend: true
 path: /chj/data/mongodb/chj_db/mongos/log/mongos.log

processManagement:
 fork: true # fork and run in background
 pidFilePath: /chj/data/mongodb/chj_db/mongos/log/mongos.pid # location of pidfile
 timeZoneInfo: /usr/share/zoneinfo

net:
 port: 27000
 bindIpAll: true
 maxIncomingConnections: 1000
 unixDomainSocket:
  enabled: true
  pathPrefix: /chj/data/mongodb/chj_db/mongos/log
  filePermissions: 0700

security:
 keyFile: /chj/data/mongodb/chj_db/auth/keyfile.key
# authorization: enabled

#replication:

sharding:
 configDB: repl_configsvr/172.21.244.101:27100,172.21.244.102:27100,172.21.244.94:27100

B. config server configuration file


vim /chj/data/mongodb/chj_db/conf/config.conf
systemLog:
 destination: file
 logAppend: true
 path: /chj/data/mongodb/chj_db/config/log/configsrv.log

storage:
 dbPath: /chj/data/mongodb/chj_db/config/data
 journal:
  enabled: true
 wiredTiger:
  engineConfig:
   directoryForIndexes: true

processManagement:
 fork: true # fork and run in background
 pidFilePath: /chj/data/mongodb/chj_db/config/log/configsrv.pid # location of pidfile
 timeZoneInfo: /usr/share/zoneinfo

net:
 port: 27100
 bindIp: 0.0.0.0 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
 #bindIpAll: true
 maxIncomingConnections: 1000
 unixDomainSocket:
  enabled: true
  pathPrefix: /chj/data/mongodb/chj_db/config/data
  filePermissions: 0700

security:
 keyFile: /chj/data/mongodb/chj_db/auth/keyfile.key
 authorization: enabled

replication:
 replSetName: repl_configsvr
sharding:
 clusterRole: configsvr

C. storage node (shard) configuration file


vim /chj/data/mongodb/chj_db/conf/shard1.conf
systemLog:
 destination: file
 logAppend: true
 path: /chj/data/mongodb/chj_db/shard1/log/shard1.log

storage:
 dbPath: /chj/data/mongodb/chj_db/shard1/data
 journal:
  enabled: true
 wiredTiger:
  engineConfig:
   directoryForIndexes: true

processManagement:
 fork: true # fork and run in background
 pidFilePath: /chj/data/mongodb/chj_db/shard1/log/shard1.pid # location of pidfile
 timeZoneInfo: /usr/share/zoneinfo

net:
 port: 27101
 bindIp: 0.0.0.0 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
 #bindIpAll: true
 maxIncomingConnections: 1000
 unixDomainSocket:
  enabled: true
  pathPrefix: /chj/data/mongodb/chj_db/shard1/data
  filePermissions: 0700

security:
 keyFile: /chj/data/mongodb/chj_db/auth/keyfile.key
 authorization: enabled

replication:
 replSetName: shard1
sharding:
 clusterRole: shardsvr

4. Generate the authentication key file


echo "chj123456" >/chj/data/mongodb/chj_db/auth/keyfile.key
# Set the file permissions to 400, otherwise the service will not start
chmod 400 /chj/data/mongodb/chj_db/auth/keyfile.key
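
Every node in the cluster must use an identical key file. As an alternative to the short string above, a longer random key can be generated with openssl, for example:

openssl rand -base64 756 >/chj/data/mongodb/chj_db/auth/keyfile.key
chmod 400 /chj/data/mongodb/chj_db/auth/keyfile.key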

3. Cluster initialization

1. Start config server service


/chj/data/mongodb/chj_db/bin/mongod -f /chj/data/mongodb/chj_db/conf/config.conf

2. Initialize the config server cluster


# Log in to any one of the config server nodes
/chj/data/mongodb/chj_db/bin/mongo --port 27100
# Configure the cluster 
config = { _id:"repl_configsvr",members:[ {_id:0,host:"172.21.244.101:27100"}, {_id:1,host:"172.21.244.102:27100"}, {_id:2,host:"172.21.244.94:27100"}] }
# Initialize the cluster 
rs.initiate(config)
PS: Output like the following indicates that the replica set was initialized successfully; the cluster status can then be checked with rs.status()
{
    "ok" : 1,
    "$gleStats" : {
        "lastOpTime" : Timestamp(1557538260, 1),
        "electionId" : ObjectId("000000000000000000000000")
    },
    "lastCommittedOpTime" : Timestamp(0, 0)
}

3. Start the storage node service


echo "never" >/sys/kernel/mm/transparent_hugepage/enabled
echo "never" >/sys/kernel/mm/transparent_hugepage/defrag
0
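A minimal sketch of this step, by analogy with the config server start: run the shard mongod on each of the three nodes with the shard1 configuration file.

# Run on each of the three nodes
/chj/data/mongodb/chj_db/bin/mongod -f /chj/data/mongodb/chj_db/conf/shard1.conf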

4. Initialize the storage cluster


echo "never" >/sys/kernel/mm/transparent_hugepage/enabled
echo "never" >/sys/kernel/mm/transparent_hugepage/defrag
1
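A sketch of the initialization, following the same pattern as the config server replica set and the member hosts from the deployment table:

# Log in to any one of the shard nodes
/chj/data/mongodb/chj_db/bin/mongo --port 27101
# Define the replica set configuration
config = { _id:"shard1",members:[ {_id:0,host:"172.21.244.101:27101"}, {_id:1,host:"172.21.244.102:27101"}, {_id:2,host:"172.21.244.94:27101"}] }
# Initialize the replica set
rs.initiate(config)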

5. Add the management account of the storage cluster

Log in to the primary node


echo "never" >/sys/kernel/mm/transparent_hugepage/enabled
echo "never" >/sys/kernel/mm/transparent_hugepage/defrag
2
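A sketch of creating the shard-local administrator on the shard1 primary; the user name and password below are placeholders, not the values used in production.

# On the PRIMARY of shard1
/chj/data/mongodb/chj_db/bin/mongo --port 27101
use admin
db.createUser({ user: "admin", pwd: "CHANGE_ME", roles: [ { role: "root", db: "admin" } ] })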

6. Start mongos service


/chj/data/mongodb/chj_db/bin/mongos -f /chj/data/mongodb/chj_db/conf/mongos.conf

7. Add the management account of config server

Log in to any mongos node


echo "never" >/sys/kernel/mm/transparent_hugepage/enabled
echo "never" >/sys/kernel/mm/transparent_hugepage/defrag
4
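A sketch of creating the cluster administrator through mongos (this user is stored on the config servers); the credentials are again placeholders.

/chj/data/mongodb/chj_db/bin/mongo --port 27000
use admin
db.createUser({ user: "admin", pwd: "CHANGE_ME", roles: [ { role: "root", db: "admin" } ] })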

8. Add the storage node to mongos

Log in to any mongos node (if you are still in the session from the previous step, log out and log back in)


echo "never" >/sys/kernel/mm/transparent_hugepage/enabled
echo "never" >/sys/kernel/mm/transparent_hugepage/defrag
5
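A sketch of registering the shard1 replica set with the cluster, assuming the hosts from the deployment table; authenticate as the administrator created above first.

/chj/data/mongodb/chj_db/bin/mongo --port 27000
use admin
db.auth("admin", "CHANGE_ME")
sh.addShard("shard1/172.21.244.101:27101,172.21.244.102:27101,172.21.244.94:27101")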

4. Delivery to the business side

1. Set up an app account


echo "never" >/sys/kernel/mm/transparent_hugepage/enabled
echo "never" >/sys/kernel/mm/transparent_hugepage/defrag
6
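A sketch of creating an application account for the chj_db database through mongos; the user name and password are placeholders.

/chj/data/mongodb/chj_db/bin/mongo --port 27000
use admin
db.auth("admin", "CHANGE_ME")
use chj_db
db.createUser({ user: "chj_app", pwd: "CHANGE_ME", roles: [ { role: "readWrite", db: "chj_db" } ] })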

2. Deliver connection information to the developers


echo "never" >/sys/kernel/mm/transparent_hugepage/enabled
echo "never" >/sys/kernel/mm/transparent_hugepage/defrag
7
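What the business side needs is the mongos addresses, the database name, and the application credentials, for example a connection string along these lines (with the placeholder user and password from the previous step):

mongodb://chj_app:CHANGE_ME@172.21.244.101:27000,172.21.244.102:27000,172.21.244.94:27000/chj_db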

5. Enable sharding for the database

If data volume grows later and sharding becomes necessary, enable it as follows:


# Specifies the database to shard 
mongos> sh.enableSharding("chj_db") 
{
    "ok" : 1,
    "operationTime" : Timestamp(1557546835, 3),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1557546835, 3),
        "signature" : {
            "hash" : BinData(0,"bkrrr8Kxrr9j9udrDc/hURHld38="),
            "keyId" : NumberLong("6689575940508352541")
        }
    }
}
# Shard the users collection in the chj_db database, with name and age (both ascending) as the shard key
mongos> sh.shardCollection("chj_db.users",{name:1,age:1}) 
{
    "collectionsharded" : "chj_db.users",
    "collectionUUID" : UUID("59c0b99f-efff-4132-b489-f6c7e3d98f42"),
    "ok" : 1,
    "operationTime" : Timestamp(1557546861, 12),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1557546861, 12),
        "signature" : {
            "hash" : BinData(0,"UBB1A/YODnmXwG5eAhgNLcKVzug="),
            "keyId" : NumberLong("6689575940508352541")
        }
    }
}
# View the sharding status
mongos> sh.status() 
--- Sharding Status ---
 sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5cd625e0da695346d740f749")
 }
 shards:
    { "_id" : "shard1", "host" : "shard1/172.21.244.101:27101,172.21.244.102:27101", "state" : 1 }
 active mongoses:
    "4.0.4-62-g7e345a7" : 3
 autosplit:
    Currently enabled: yes
 balancer:
    Currently enabled: yes
    Currently running: no
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
        No recent migrations
 databases:
    { "_id" : "chj_db", "primary" : "shard1", "partitioned" : true, "version" : { "uuid" : UUID("82088bc7-7b98-4033-843d-7058d8d959f6"), "lastMod" : 1 } }
        chj_db.users
            shard key: { "name" : 1, "age" : 1 }
            unique: false
            balancing: true
            chunks:
                shard1 1
            { "name" : { "$minKey" : 1 }, "age" : { "$minKey" : 1 } } -->> { "name" : { "$maxKey" : 1 }, "age" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
    { "_id" : "config", "primary" : "config", "partitioned" : true }
        config.system.sessions
            shard key: { "_id" : 1 }
            unique: false
            balancing: true
            chunks:
                shard1 1
            { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
