hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDecommissioningStatus.java
10 lines changed: 10 additions & 0 deletions
@@ -323,6 +323,16 @@ public void testDecommissionStatus() throws Exception {
     AdminStatesBaseTest.cleanupFile(fileSys, file2);
   }

+
+    // Why do we verify the initial state of the DataNodes here?
+    // Before we start the actual decommission testing, we should ensure
+    // that all 8 blocks (the original 4 blocks of the 2 files and their
+    // 4 replicas) are present across the two available DataNodes. If we
+    // don't wait until all 8 blocks are reported live by the BlockManager,
+    // one of the replicas might not yet be present on any DataNode when
+    // we start the decommissioning process, which would result in a
+    // flaky test because the total (number of under-replicated blocks,
+    // number of outOfService-only replicas, number of under-replicated
+    // blocks in open files) counts would be
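The precondition described in the added comment, waiting until all expected block replicas have been reported before starting decommission, is typically enforced with a poll-until-true helper (in Hadoop tests this role is played by utilities such as `GenericTestUtils.waitFor`). A minimal self-contained sketch of that pattern follows; the names `waitFor`, `EXPECTED_TOTAL_BLOCKS`, and the simulated reporter thread are illustrative assumptions, not the Hadoop API:

```java
import java.util.function.BooleanSupplier;

public class WaitForBlocksSketch {
    // Assumption matching the comment: 4 blocks across 2 files, replication 2.
    static final int EXPECTED_TOTAL_BLOCKS = 8;

    // Poll the condition every intervalMs until it holds or timeoutMs elapses.
    static boolean waitFor(BooleanSupplier condition, long intervalMs, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(intervalMs);
        }
        // One final check at the deadline boundary.
        return condition.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated block-report counter standing in for BlockManager state.
        int[] reportedBlocks = {0};
        Thread reporter = new Thread(() -> {
            for (int i = 0; i < EXPECTED_TOTAL_BLOCKS; i++) {
                synchronized (reportedBlocks) {
                    reportedBlocks[0]++;
                }
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        reporter.start();

        // The test would block here until all replicas are visible,
        // then proceed to start decommissioning.
        boolean ok = waitFor(() -> {
            synchronized (reportedBlocks) {
                return reportedBlocks[0] >= EXPECTED_TOTAL_BLOCKS;
            }
        }, 20, 5000);
        reporter.join();
        System.out.println(ok ? "all blocks reported" : "timed out");
    }
}
```

Gating on the condition rather than sleeping for a fixed duration is what removes the flakiness the comment describes: the test proceeds as soon as the state is ready, and fails loudly on timeout instead of racing the block reports.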