DISASTER RECOVERY AND BACKUP:
Quiz:
1. In a year, what is the maximum number of days of downtime you can have and still claim >99% uptime? [3] - 1% of 365 days is about 3.65 days, so 3 days of downtime (~99.2% uptime) still qualifies, while 4 days of downtime would drop you to ~98.9% availability.
2. What are the key criteria for deciding your disaster recovery requirements?
Tolerance for data loss[T]
Tolerance for downtime[T]
Tolerance for reduced capacity[T]
Tolerance for CPU-intensive queries
Tolerance to iocane powder
3. Which of the following systems might have a low tolerance for both data loss and downtime?
Cache in front of the system of record
Data storage for a system that need only deliver information once a month
Online ad servers[T]
A system with several redundant servers and low concern for consistency
A system using MongoDB only as an in-memory cache in front of a relational database
4. What conclusions regarding downtime and data loss should we draw in considering 2-node replica sets vs. single node deployments in MongoDB? Check all that apply.
Chances for downtime are increased[T] - A 2-node replica set does not provide HA (if either node goes down, the survivor cannot see a majority and will not act as primary), and there is one more node that can fail.
Chances for downtime are decreased
Chances for downtime are unaffected
Chances for data loss are increased
Chances for data loss are decreased[T]
Chances for data loss are unaffected
5. Which of the following systems might have a low tolerance for data loss, but a high tolerance for downtime?
The navigation system used by automobiles manufactured by Ford
A courseware system used during the teaching day at an elementary school[T]
The blogging system used by the London Times
A retail banking transaction system
Gmail
6. If you have 5 data centers across which your sharded cluster is distributed, how many data centers will be without a config server? - 2, since a sharded cluster has exactly 3 config servers.
7. With regard to backing up your data, which of the following questions should you take time to consider?
Do I need to do backups?
How quickly can I restore?[T]
What complications are distributed systems introducing?[T]
Is my test strategy catching errors in backups?[T]
8. What impacts might a snapshot based backup strategy have on a live system? Check all that apply.
Slower operations[T]
You will have to shut your system down to do it
Slower disk I/O[T]
It's not free[T]
There are no obvious impacts
9. When is it appropriate to use mongodump for backup?
For small data sets[T]
For config servers[T]
In large production systems - dumping and restoring may take a long time and will have a significant impact on your system.
10. True or False: You can prevent certain collections from being backed up in MMS? - True
You need to install a separate backup agent on your server for this to work; this agent is different from the MMS monitoring agent. *UGH!* - MMS backup is *not free*. Two-factor authentication is used for taking backups, and the backup data is stored on MMS hosts.
The pricing is based on the amount of space you use for backups.
===============================================================================================================
H-w 3.1:
Primary:
mongod --port 30001 --dbpath mongod-pri --replSet CorruptionTest --smallfiles --oplogSize 128
Secondary:
mongod --port 30002 --dbpath mongod-sec --replSet CorruptionTest --smallfiles --oplogSize 128
Arbiter:
mongod --port 30003 --dbpath mongod-arb --replSet CorruptionTest
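- The notes don't show the replica set being initiated; a minimal sketch of that step (assuming all three mongods are up on localhost on the ports above, and that the third member is added as an arbiter) would be:
  mongo --port 30001
  > rs.initiate({
        _id: "CorruptionTest",
        members: [
            { _id: 0, host: "127.0.0.1:30001" },                    // becomes the primary
            { _id: 1, host: "127.0.0.1:30002" },                    // the secondary
            { _id: 2, host: "127.0.0.1:30003", arbiterOnly: true }  // the arbiter
        ]
    })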
- Checking which is P/S/A:
> mongo --port 30002
MongoDB shell version: 2.6.1
connecting to: 127.0.0.1:30002/test
CorruptionTest:SECONDARY>
CorruptionTest:SECONDARY>
CorruptionTest:SECONDARY> rs.status()
{
    "set" : "CorruptionTest",
    "date" : ISODate("2014-05-16T16:54:40Z"),
    "myState" : 2,
    "syncingTo" : "127.0.0.1:30001",
    "members" : [
        {
            "_id" : 0,
            "name" : "127.0.0.1:30001",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 229,
            "optime" : Timestamp(1399499925, 7077),
            "optimeDate" : ISODate("2014-05-07T21:58:45Z"),
            "lastHeartbeat" : ISODate("2014-05-16T16:54:39Z"),
            "lastHeartbeatRecv" : ISODate("2014-05-16T16:54:40Z"),
            "pingMs" : 0,
            "electionTime" : Timestamp(1400259057, 1),
            "electionDate" : ISODate("2014-05-16T16:50:57Z")
        },
        {
            "_id" : 1,
            "name" : "127.0.0.1:30002",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 246,
            "optime" : Timestamp(1399499925, 7077),
            "optimeDate" : ISODate("2014-05-07T21:58:45Z"),
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "127.0.0.1:30003",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 167,
            "lastHeartbeat" : ISODate("2014-05-16T16:54:39Z"),
            "lastHeartbeatRecv" : ISODate("2014-05-16T16:54:39Z"),
            "pingMs" : 0
        }
    ],
    "ok" : 1
}
CorruptionTest:SECONDARY> show collections
2014-05-16T22:26:02.930+0530 error: { "$err" : "not master and slaveOk=false", "code" : 13435 } at src/mongo/shell/query.js:131
CorruptionTest:SECONDARY> show dbs
admin (empty)
local 0.281GB
test (empty)
testDB 0.125GB
CorruptionTest:SECONDARY> use testDB
switched to db testDB
CorruptionTest:SECONDARY> show collections
2014-05-16T22:26:20.831+0530 error: { "$err" : "not master and slaveOk=false", "code" : 13435 } at src/mongo/shell/query.js:131
CorruptionTest:SECONDARY> rs.slaveOk()
CorruptionTest:SECONDARY> show collections
system.indexes
testColl
CorruptionTest:SECONDARY> db.testColl.findOne()
{
    "_id" : 0,
    "string" : "testStringForPadding0000000000000000000000000000000000000000",
    "otherID" : ObjectId("536aac8ef7f5f8e9b4c240ea")
}
CorruptionTest:SECONDARY> db.testColl.find().explain()
2014-05-16T22:27:21.961+0530 error: {
    "$err" : "BSONObj size: 1685417573 (0x64756E65) is invalid. Size must be between 0 and 16793600(16MB) First element: //drrdu/dvad\u0002string: ?type=111",
    "code" : 10334
} at src/mongo/shell/query.js:131
This shows an invalid BSONObj size error, i.e. the data files on this secondary are corrupted.
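- Not part of the original run, but another way one might confirm the corruption on this secondary is a full collection validation (a healthy node reports "valid" : true):
  CorruptionTest:SECONDARY> db.testColl.validate(true)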
- Stop the secondary mongod using the following commands in its shell:
CorruptionTest:SECONDARY> db.shutdownServer()
shutdown command only works with the admin database; try 'use admin'
CorruptionTest:SECONDARY> use admin
switched to db admin
CorruptionTest:SECONDARY> db.shutdownServer()
2014-05-16T22:35:21.659+0530 DBClientCursor::init call() failed
server should be down...
2014-05-16T22:35:21.662+0530 trying reconnect to 127.0.0.1:30002 (127.0.0.1) failed
2014-05-16T22:35:21.663+0530 warning: Failed to connect to 127.0.0.1:30002, reason: errno:111 Connection refused
2014-05-16T22:35:21.663+0530 reconnect 127.0.0.1:30002 (127.0.0.1) failed failed couldn't connect to server 127.0.0.1:30002 (127.0.0.1), connection attempt failed
2014-05-16T22:35:21.779+0530 trying reconnect to 127.0.0.1:30002 (127.0.0.1) failed
2014-05-16T22:35:21.779+0530 warning: Failed to connect to 127.0.0.1:30002, reason: errno:111 Connection refused
2014-05-16T22:35:21.779+0530 reconnect 127.0.0.1:30002 (127.0.0.1) failed failed couldn't connect to server 127.0.0.1:30002 (127.0.0.1), connection attempt failed
- Delete all the files under mongod-sec (the secondary's dbpath):
➜ ~ cd mongod-sec/
➜ ~ ls
journal local.0 local.1 local.ns mongod.lock testDB.0 testDB.1 testDB.2 testDB.ns
ravitezu@terminator:~/Downloads/mongod-sec$ rm -rf *
- Start the secondary again, now with an empty dbpath:
➜ ~ mongod --port 30002 --dbpath mongod-sec --replSet CorruptionTest --smallfiles --oplogSize 128
- From the logs above: the node has recovered, which means it performed an initial sync and pulled all of its data back from the primary.
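- A quick check (not in the original log) that the resync really brought the data back would be to reconnect and re-run the query that failed earlier:
  mongo --port 30002
  CorruptionTest:SECONDARY> rs.slaveOk()
  CorruptionTest:SECONDARY> use testDB
  CorruptionTest:SECONDARY> db.testColl.find().explain()   // should now succeed instead of raising the BSONObj size error
  CorruptionTest:SECONDARY> db.testColl.count()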
- Use MongoProc to validate the homework.
===============================================================================================================
H-w 3.2:
- Before Friday
  Every node can handle a total of 20,000 connections, but only 90% of that capacity is allowed to be used,
  so each node should only be relied on for 18,000 connections.
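  The arithmetic, as a quick check (plain JavaScript, e.g. in the mongo shell):
  > var perNode = 20000 * 0.9;   // usable connections per node
  > perNode
  18000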
===============================================================================================================
H-w 3.3:
- The application will still be able to read data.
- The primary in DC2 will step itself down when the partition occurs.
- The two secondaries will hold an election when the partition occurs.
- The application will still be able to write while the partition is up.
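- The scenario assumed here is a 3-member replica set with two members in DC1 and the primary in DC2; a sketch of such a config (the set name and hostnames below are hypothetical, not from the homework) would be:
  > rs.initiate({
        _id: "dcTest",
        members: [
            { _id: 0, host: "dc1-node1:27017" },   // data center 1
            { _id: 1, host: "dc1-node2:27017" },   // data center 1
            { _id: 2, host: "dc2-node1:27017" }    // data center 2, primary before the partition
        ]
    })
  When DC2 is cut off, member 2 can no longer see a majority and steps down, while members 0 and 1 still form a majority and elect a new primary, which is why reads and writes keep working.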
===============================================================================================================
H-w 3.4:
- The db.collection.drop() command will still be in the oplog, and will need to be avoided.
- Ensure that any writes that occurred after the drop() command are not replayed (you might need to deal with these later), because these might lead to unexpected or inconsistent results.
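- A sketch of how one might locate the drop in the oplog and the writes that follow it (the collection name "myColl" here is hypothetical; the actual procedure is worked through in H-w 3.5 below):
  > use local
  > var dropOp = db.oplog.rs.findOne({ "op" : "c", "o.drop" : "myColl" })   // the drop command entry
  > db.oplog.rs.find({ "ts" : { $gt : dropOp.ts } })                        // writes after the drop, which must not be replayed blindly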
===============================================================================================================
H-w 3.5:
- mongod --config mongod.conf -> This starts a mongod replica set member with backuptest as the dbpath, listening on port 30001.
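  The config file itself isn't reproduced in these notes; given the dbpath/port above and the set name in the prompt below, it presumably contains 2.6-style settings along these lines (smallfiles and oplogSize are guesses):
  port = 30001
  dbpath = backuptest
  replSet = BackupTest
  smallfiles = true
  oplogSize = 128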
- Now connect to it and find the timestamp of the offending operation (the collection drop):
➜ homework_3_5 mongo --port 30001
MongoDB shell version: 2.6.1
connecting to: 127.0.0.1:30001/test
Server has startup warnings:
2014-05-18T13:34:56.227+0200 [initandlisten]
2014-05-18T13:34:56.227+0200 [initandlisten] ** NOTE: This is a 32 bit MongoDB binary.
2014-05-18T13:34:56.227+0200 [initandlisten] ** 32 bit builds are limited to less than 2GB of data (or less with --journal).
2014-05-18T13:34:56.227+0200 [initandlisten] ** Note that journaling defaults to off for 32 bit and is currently off.
2014-05-18T13:34:56.227+0200 [initandlisten] ** See http://dochub.mongodb.org/core/32bit
2014-05-18T13:34:56.227+0200 [initandlisten]
BackupTest:PRIMARY> show dbs
admin (empty)
backupDB 0.250GB
local 0.281GB
BackupTest:PRIMARY> use local
switched to db local
BackupTest:PRIMARY> show collections
me
oplog.rs
startup_log
system.indexes
system.replset
BackupTest:PRIMARY> db.oplog.rs.find({"op":"c"})
{ "ts" : Timestamp(1398778745, 1), "h" : NumberLong("-4262957146204779874"), "v" : 2, "op" : "c", "ns" : "backupDB.$cmd", "o" : { "drop" : "backupColl" } }
The ts of the drop is 1398778745:1; this is used as the --oplogLimit below.
➜ homework_3_5 mongodump --port 30001 -d local -c oplog.rs -o oplogD
connected to: 127.0.0.1:30001
2014-05-18T13:37:30.556+0200 DATABASE: local to oplogD/local
2014-05-18T13:37:30.585+0200 local.oplog.rs to oplogD/local/oplog.rs.bson
2014-05-18T13:37:31.942+0200 701202 documents
2014-05-18T13:37:31.942+0200 Metadata for local.oplog.rs to oplogD/local/oplog.rs.metadata.json
➜ homework_3_5 mkdir oplogR
➜ homework_3_5 mv oplogD/local/oplog.rs.bson oplogR/oplog.bson
- Now start a new standalone mongod (not a replica set member) and restore the backup into it with mongorestore. (The oplog dump was moved to oplogR/oplog.bson above because mongorestore --oplogReplay looks for a file named oplog.bson in the dump directory it is given.)
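  The command that starts the standalone isn't shown here; a minimal sketch, assuming a fresh dbpath directory and the default port 27017 (the mongorestore below connects to 127.0.0.1 with no --port), would be:
  mkdir standalone-data
  mongod --dbpath standalone-data --smallfiles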
➜ homework_3_5 mongorestore --journal backupDB
connected to: 127.0.0.1
2014-05-18T13:41:47.599+0200 backupDB/backupColl.bson
2014-05-18T13:41:47.599+0200 going into namespace [backupDB.backupColl]
2014-05-18T13:41:50.242+0200 Progress: 6988800/55036800 12% (bytes)
2014-05-18T13:41:53.036+0200 Progress: 12448800/55036800 22% (bytes)
2014-05-18T13:41:56.468+0200 Progress: 19046300/55036800 34% (bytes)
2014-05-18T13:41:59.304+0200 Progress: 24497200/55036800 44% (bytes)
2014-05-18T13:42:02.006+0200 Progress: 29875300/55036800 54% (bytes)
2014-05-18T13:42:05.160+0200 Progress: 34871200/55036800 63% (bytes)
2014-05-18T13:42:08.003+0200 Progress: 40404000/55036800 73% (bytes)
2014-05-18T13:42:11.515+0200 Progress: 47065200/55036800 85% (bytes)
2014-05-18T13:42:14.014+0200 Progress: 51706200/55036800 93% (bytes)
604800 objects found
2014-05-18T13:42:15.729+0200 Creating index: { key: { _id: 1 }, name: "_id_", ns: "backupDB.backupColl" }
Then, replay the oplog on this standalone node, stopping just before the drop (--oplogLimit excludes entries at or after the given timestamp): mongorestore --oplogReplay --oplogLimit 1398778745:1 oplogR
➜ homework_3_5 mongorestore --oplogReplay --oplogLimit 1398778745:1 oplogR
connected to: 127.0.0.1
2014-05-18T13:42:37.118+0200 Replaying oplog
2014-05-18T13:42:40.014+0200 Progress: 2371319/113990182 2% (bytes)
2014-05-18T13:42:43.018+0200 Progress: 4826219/113990182 4% (bytes)
2014-05-18T13:42:46.016+0200 Progress: 7281119/113990182 6% (bytes)
2014-05-18T13:42:49.015+0200 Progress: 9786119/113990182 8% (bytes)
2014-05-18T13:42:52.007+0200 Progress: 12224319/113990182 10% (bytes)
2014-05-18T13:42:55.016+0200 Progress: 14679219/113990182 12% (bytes)
2014-05-18T13:42:58.019+0200 Progress: 17150819/113990182 15% (bytes)
2014-05-18T13:43:01.021+0200 Progress: 19622419/113990182 17% (bytes)
2014-05-18T13:43:04.023+0200 Progress: 22094019/113990182 19% (bytes)
2014-05-18T13:43:07.011+0200 Progress: 24548919/113990182 21% (bytes)
2014-05-18T13:43:10.015+0200 Progress: 27053919/113990182 23% (bytes)
2014-05-18T13:43:13.024+0200 Progress: 29558919/113990182 25% (bytes)
2014-05-18T13:43:16.026+0200 Progress: 32063919/113990182 28% (bytes)
2014-05-18T13:43:19.015+0200 Progress: 34518819/113990182 30% (bytes)
2014-05-18T13:43:22.013+0200 Progress: 37157419/113990182 32% (bytes)
2014-05-18T13:43:25.009+0200 Progress: 39729219/113990182 34% (bytes)
2014-05-18T13:43:28.016+0200 Progress: 42367819/113990182 37% (bytes)
2014-05-18T13:43:31.014+0200 Progress: 45190119/113990182 39% (bytes)
2014-05-18T13:43:34.024+0200 Progress: 47711819/113990182 41% (bytes)
2014-05-18T13:43:37.020+0200 Progress: 50266919/113990182 44% (bytes)
2014-05-18T13:43:40.023+0200 Progress: 52838719/113990182 46% (bytes)
2014-05-18T13:43:43.017+0200 Progress: 55560819/113990182 48% (bytes)
2014-05-18T13:43:46.020+0200 Progress: 58032419/113990182 50% (bytes)
2014-05-18T13:43:49.021+0200 Progress: 60537419/113990182 53% (bytes)
2014-05-18T13:43:52.019+0200 Progress: 63059119/113990182 55% (bytes)
2014-05-18T13:43:55.009+0200 Progress: 65547419/113990182 57% (bytes)
2014-05-18T13:43:58.019+0200 Progress: 68186019/113990182 59% (bytes)
2014-05-18T13:44:01.022+0200 Progress: 70774519/113990182 62% (bytes)
2014-05-18T13:44:04.007+0200 Progress: 73262819/113990182 64% (bytes)
2014-05-18T13:44:07.024+0200 Progress: 75801219/113990182 66% (bytes)
2014-05-18T13:44:10.017+0200 Progress: 78473219/113990182 68% (bytes)
2014-05-18T13:44:13.024+0200 Progress: 81078419/113990182 71% (bytes)
2014-05-18T13:44:16.014+0200 Progress: 83633519/113990182 73% (bytes)
2014-05-18T13:44:19.024+0200 Progress: 86171919/113990182 75% (bytes)
2014-05-18T13:44:22.012+0200 Progress: 88643519/113990182 77% (bytes)
2014-05-18T13:44:25.017+0200 Progress: 91115119/113990182 79% (bytes)
2014-05-18T13:44:28.009+0200 Progress: 93620119/113990182 82% (bytes)
2014-05-18T13:44:31.014+0200 Progress: 96125119/113990182 84% (bytes)
2014-05-18T13:44:34.010+0200 Progress: 98713619/113990182 86% (bytes)
2014-05-18T13:44:37.012+0200 Progress: 101171855/113990182 88% (bytes)
2014-05-18T13:44:40.010+0200 Progress: 103228555/113990182 90% (bytes)
2014-05-18T13:44:43.017+0200 Progress: 105219755/113990182 92% (bytes)
2014-05-18T13:44:46.021+0200 Progress: 107237155/113990182 94% (bytes)
2014-05-18T13:44:49.024+0200 Progress: 109280755/113990182 95% (bytes)
2014-05-18T13:44:52.009+0200 Progress: 111298155/113990182 97% (bytes)
2014-05-18T13:44:55.011+0200 Progress: 113589119/113990182 99% (bytes)
701202 objects found
2014-05-18T13:44:55.489+0200 Applied 701200 oplog entries out of 701201 (1 skipped).
Finally, run MongoProc against this node, or restart the standalone on port 30001, since MongoProc checks that port when validating your answer.
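- A quick sanity check before validating (not part of the original log) is to connect to the standalone and confirm the dropped collection is back:
  mongo --port 30001
  > use backupDB
  > db.backupColl.count()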