MongoDB cluster reconfiguration and freeing disk space: a worked example
- 2020-06-07 05:29:23
- OfStack
Since MongoDB (with the MMAPv1 storage engine used here) does not return disk space to the operating system after data is deleted, the space is reclaimed by rebuilding each node's data directory.
1 Experimental Environment
A replica set is configured that consists of the following three nodes:
10.192.203.201:27017 PRIMARY
10.192.203.202:27017 SECONDARY
10.192.203.202:10001 ARBITER
2 Experimental Steps
2.1 Simulation environment
use dba;
for (var i = 0; i < 1000000; i++) db.c.insert({uid: i, uname: 'osqlfan' + i});
db.c.find().count(); // 1000000
db.stats();
{
"db" : "dba",
"collections" : 5,
"objects" : 1000111,
"avgObjSize" : 111.9994880568257,
"dataSize" : 112011920,
"storageSize" : 174796800,
"numExtents" : 17,
"indexes" : 3,
"indexSize" : 32475072,
"fileSize" : 469762048,
"nsSizeMB" : 16,
"extentFreeList" : {
"num" : 0,
"totalSize" : 0
},
"dataFileVersion" : {
"major" : 4,
"minor" : 22
},
"ok" : 1
}
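As a sanity check on the stats above, avgObjSize is simply dataSize divided by objects. A one-line computation with the numbers copied from the output (awk chosen only for the arithmetic):

```shell
# avgObjSize = dataSize / objects, using the db.stats() values above
awk 'BEGIN { printf "%.10f\n", 112011920 / 1000111 }'
```

The result matches the reported avgObjSize of 111.9994880568257.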
The inserted data takes up roughly 400 MB on disk:
-rw-------. 1 root root 134217728 Nov 7 13:38 dba.1
-rw-------. 1 root root 268435456 Nov 7 13:38 dba.2
[root@slave2 ~]# du -sh /data/mongo/data
4.7G /data/mongo/data
Now drop the dba.c collection:
MyReplset:PRIMARY> db.c.drop();
true
MyReplset:PRIMARY> db.c.find().count();
0
MyReplset:PRIMARY> db.stats();
{
"db" : "dba",
"collections" : 4,
"objects" : 108,
"avgObjSize" : 108.44444444444444,
"dataSize" : 11712,
"storageSize" : 61440,
"numExtents" : 5,
"indexes" : 2,
"indexSize" : 16352,
"fileSize" : 469762048,
"nsSizeMB" : 16,
"extentFreeList" : {
"num" : 18,
"totalSize" : 212492288
},
"dataFileVersion" : {
"major" : 4,
"minor" : 22
},
"ok" : 1
}
Note that dataSize, indexSize, and storageSize have all shrunk, but fileSize is unchanged and the mongo data directory still occupies 4.7G; the freed extents are only tracked in extentFreeList for reuse within this database.
2.2 Rebuild the secondary 10.192.203.202:27017 first
View the current primary/secondary relationship:
MyReplset:PRIMARY>rs.status();
{
"set" : "MyReplset",
"date" :ISODate("2016-11-07T07:10:50.717Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" :"10.192.203.201:27017",
"health" : 1,
"state" : 1,
"stateStr" :"PRIMARY",
"uptime" : 964,
"optime" :Timestamp(1478239977, 594),
"optimeDate" :ISODate("2016-11-04T06:12:57Z"),
"electionTime" :Timestamp(1478502021, 1),
"electionDate" :ISODate("2016-11-07T07:00:21Z"),
"configVersion" :2,
"self" : true
},
{
"_id" : 1,
"name" :"10.192.203.202:27017",
"health" : 1,
"state" : 2,
"stateStr" :"SECONDARY",
"uptime" : 628,
"optime" :Timestamp(1478239977, 594),
"optimeDate" :ISODate("2016-11-04T06:12:57Z"),
"lastHeartbeat" :ISODate("2016-11-07T07:10:49.257Z"),
"lastHeartbeatRecv": ISODate("2016-11-07T07:10:50.143Z"),
"pingMs" : 2,
"configVersion" :2
},
{
"_id" : 2,
"name" :"10.192.203.202:10001",
"health" : 1,
"state": 7,
"stateStr" :"ARBITER",
"uptime" : 618,
"lastHeartbeat" :ISODate("2016-11-07T07:10:49.416Z"),
"lastHeartbeatRecv": ISODate("2016-11-07T07:10:49.847Z"),
"pingMs" : 2,
"configVersion" :2
}
],
"ok" : 1
}
2.2.1 Close the database
MyReplset:SECONDARY> use admin;
switched to db admin
MyReplset:SECONDARY> db.shutdownServer();
2016-11-07T15:14:42.548+0800 I NETWORK DBClientCursor::init call() failed
server should be down...
2016-11-07T15:14:42.571+0800 I NETWORK trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2016-11-07T15:14:42.575+0800 W NETWORK Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2016-11-07T15:14:42.575+0800 I NETWORK reconnect 127.0.0.1:27017 (127.0.0.1) failed failed couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed
2016-11-07T15:14:42.634+0800 I NETWORK trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2016-11-07T15:14:42.637+0800 W NETWORK Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2016-11-07T15:14:42.638+0800 I NETWORK reconnect 127.0.0.1:27017 (127.0.0.1) failed failed couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed
2.2.2 Backup, delete and rebuild the data directory
Back up the 10.192.203.202:27017 data directory (steps omitted here).
After the backup completes, delete and recreate the directory:
rm -rf /data/mongo/data
mkdir /data/mongo/data
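The backup-then-recreate step can be sketched as a small shell function; the function name rebuild_data_dir is mine, and the paths in the example call are the ones used in this article:

```shell
#!/bin/sh
# Sketch of step 2.2.2 (assumption: mongod on this node is already shut
# down). Backs DATA_DIR up to BACKUP_DIR, then recreates DATA_DIR empty
# so mongod performs a full initial sync from the primary on restart.
rebuild_data_dir() {
  data_dir="$1"
  backup_dir="$2"
  mkdir -p "$backup_dir"
  cp -a "$data_dir/." "$backup_dir/"   # keep a restorable copy
  rm -rf "$data_dir"
  mkdir -p "$data_dir"                 # empty dir for the initial sync
}

# Example (paths from this article):
# rebuild_data_dir /data/mongo/data /data/mongo/backup
```

Keeping the backup until the initial sync finishes means the node can be restored if the resync fails partway through.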
2.2.3 Start the database
Start process 10.192.203.202:27017:
/usr/local/mongodb/bin/mongod --config /usr/local/mongodb/mongod.cnf --replSet MyReplset --rest
2.2.4 Check
Verify that the database is healthy and that the previously existing databases are present, then check whether disk usage has dropped.
In this experiment, the data directory shrank by about 400 MB, to 4.3 GB.
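The disk-usage check can be scripted with a du-based helper; the helper name and the example path are assumptions for illustration:

```shell
#!/bin/sh
# Report a directory's disk usage in kilobytes, for comparing the data
# directory's size before and after the rebuild.
dir_size_kb() {
  du -sk "$1" | awk '{print $1}'
}

# Example (path from this article):
# dir_size_kb /data/mongo/data
```

Record the value before shutting the node down and again after the resync completes; the difference is the reclaimed space.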
2.3 Rebuild the primary
2.3.1 Switch the primary/secondary roles
Since 201 is currently the primary, its role must first be swapped with the secondary 202:27017. In this experiment there is only one secondary besides the arbiter; if there were several, run the following on each of the remaining secondaries:
rs.freeze(300); (locks the secondary so it cannot be elected primary)
Then on 10.192.203.201:27017 run: rs.stepDown(30); (demotes it)
The arguments to both rs.freeze() and rs.stepDown() are in seconds.
Run rs.status() to confirm that the roles have switched.
2.3.2 Close the database
Stop process 10.192.203.201:27017:
MyReplset:SECONDARY > use admin;
switched to db admin
MyReplset:SECONDARY > db.shutdownServer();
2.3.3 Back up, delete, and rebuild its data directory
Back up the data directory first (omitted here), then:
rm -rf /data/mongo/data
mkdir /data/mongo/data
2.3.4 Start the database
Start process 10.192.203.201:27017:
/usr/local/mongodb/bin/mongod --config /usr/local/mongodb/mongod.cnf --replSet MyReplset --rest
2.3.5 Check
Verify that the database is healthy and that the previously existing databases are present, then check whether disk usage has dropped.
As before, the data directory shrank by about 400 MB, to 4.3 GB.
The arbiter node holds no data and does not need to be rebuilt.
After the rebuild, you can switch the primary/secondary roles back to their original state.