# Bug #9318 » new_test_failures.md

Mike Gerdts, 2019-06-21 09:36 PM

 
There were some new test failures.

```
$ grep '>.*' test-results.full.diff | grep -v PASS
> [FAIL] /opt/zfs-tests/tests/functional/cli_root/zpool/zpool_002_pos
> [FAIL] /opt/zfs-tests/tests/functional/cli_root/zpool_clear/zpool_clear_001_pos
> [FAIL] /opt/zfs-tests/tests/functional/cli_root/zpool_destroy/zpool_destroy_001_pos
> [FAIL] /opt/zfs-tests/tests/functional/cli_root/zpool_remove/setup
> [SKIP] /opt/zfs-tests/tests/functional/cli_root/zpool_remove/zpool_remove_001_neg
> [SKIP] /opt/zfs-tests/tests/functional/cli_root/zpool_remove/zpool_remove_002_pos
> [SKIP] /opt/zfs-tests/tests/functional/cli_root/zpool_remove/zpool_remove_003_pos
> [KILLED] /opt/zfs-tests/tests/functional/mmp/mmp_on_off
> [FAIL] /opt/zfs-tests/tests/functional/redundancy/redundancy_003_pos
```

After the zfstest run was complete, I realized that this time around I ran with
4Kn disks whereas the baseline was with 512n disks; each disk is 1 GiB.  This
contributes to ENOSPC problems.  None of these failures are related to zvol
reservations.
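For context on why 4Kn bites harder here: with ashift=12 every allocation is
rounded up to 4 KiB, so small blocks and metadata consume noticeably more of a
1 GiB disk than they do at ashift=9.  A minimal sketch of how to see the
difference, assuming a build whose `zpool create` accepts the `ashift`
property (the pool names and file paths here are made up):

```
# Compare allocation overhead on 1 GiB file vdevs at 512-byte vs 4 KiB
# allocation sizes.  Assumes "zpool create -o ashift=N" is supported.
mkfile 1g /var/tmp/v512 /var/tmp/v4k
zpool create -o ashift=9 p512 /var/tmp/v512     # emulates 512n
zpool create -o ashift=12 p4k /var/tmp/v4k      # emulates 4Kn

# Write an identical set of small files to each pool; the ashift=12
# pool allocates roughly 8x the space for the same data.
for p in p512 p4k; do
        for i in $(seq 1 1000); do
                dd if=/dev/urandom of=/$p/f$i bs=512 count=1 2>/dev/null
        done
done
sync
zpool list -o name,alloc p512 p4k

zpool destroy p512
zpool destroy p4k
rm -f /var/tmp/v512 /var/tmp/v4k
```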
## opt/zfs-tests/tests/functional/cli_root/zpool/zpool_002_pos

```
Test: /opt/zfs-tests/tests/functional/cli_root/zpool/zpool_002_pos (run as root) [01:21] [FAIL]
13:13:09.32 ASSERTION: With ZFS_ABORT set, all zpool commands can abort and generate a core file.
13:14:30.12 /var/tmp/testdir/file3: initialized 263192576 of 268435456 bytes: No space left on device
13:14:30.13 /opt/zfs-tests/tests/functional/cli_root/zpool/zpool_002_pos: line 91: 28060: Abort
13:14:30.14 NOTE: Performing test-fail callback (/opt/zfs-tests/callbacks/zfs_dbgmsg)
```

## opt/zfs-tests/tests/functional/cli_root/zpool_clear/zpool_clear_001_pos

```
Test: /opt/zfs-tests/tests/functional/cli_root/zpool_clear/zpool_clear_001_pos (run as root) [01:24] [FAIL]
13:15:04.40 ASSERTION: Verify 'zpool clear' can clear errors of a storage pool.
13:15:11.48 SUCCESS: mkfile 268435456 /var/tmp/testdir/file.0
13:15:21.20 SUCCESS: mkfile 268435456 /var/tmp/testdir/file.1
13:16:28.24 /var/tmp/testdir/file.2: initialized 263847936 of 268435456 bytes: No space left on device
13:16:28.24 ERROR: mkfile 268435456 /var/tmp/testdir/file.2 exited 1
13:16:28.24 NOTE: Performing test-fail callback (/opt/zfs-tests/callbacks/zfs_dbgmsg)
```

## opt/zfs-tests/tests/functional/cli_root/zpool_destroy/zpool_destroy_001_pos

I suspect that this is a result of earlier tests (not part of the suite) that
ran `zpool create` with the whole-disk nodes, causing the disks to get an EFI
label rather than an SMI label.  Regardless, I've touched nothing near this.
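A quick way to tell which label a disk ended up with (my own aside, not
something the suite runs) is that `prtvtoc` reports geometry in cylinders only
for SMI labels; the disk name below is hypothetical:

```
# Hypothetical disk c1t0d0.  Assumes SMI prtvtoc output contains
# "accessible cylinders" while EFI-labeled disks report sectors instead.
if prtvtoc /dev/rdsk/c1t0d0s2 2>/dev/null | grep -q 'accessible cylinders'
then
        echo "c1t0d0: SMI (VTOC) label"
else
        echo "c1t0d0: likely EFI label"
fi
```

Converting back is an interactive `format -e` exercise (label, then pick SMI).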
```
Test: /opt/zfs-tests/tests/functional/cli_root/zpool_destroy/zpool_destroy_001_pos (run as root) [00:47] [FAIL]
13:18:33.25 ASSERTION: 'zpool destroy <pool>' can destroy a specified pool.
13:19:00.70 label error: EFI Labels do not support overlapping partitions
13:19:00.70 Partition 8 overlaps partition 1.
13:19:00.70 Warning: error writing EFI.
13:19:00.70 Label failed.
13:19:00.71 NOTE: Performing test-fail callback (/opt/zfs-tests/callbacks/zfs_dbgmsg)
```

## opt/zfs-tests/tests/functional/cli_root/zpool_remove/setup

```
Test: /opt/zfs-tests/tests/functional/cli_root/zpool_remove/setup (run as root) [00:33] [FAIL]
13:23:44.08 label error: EFI Labels do not support overlapping partitions
13:23:44.08 Partition 8 overlaps partition 5.
13:23:44.08 Warning: error writing EFI.
13:23:44.08 Label failed.
13:23:44.08 NOTE: Performing test-fail callback (/opt/zfs-tests/callbacks/zfs_dbgmsg)
```

### opt/zfs-tests/tests/functional/cli_root/zpool_remove/zpool_remove_001_neg

Setup failed, see above.

### opt/zfs-tests/tests/functional/cli_root/zpool_remove/zpool_remove_002_pos

Setup failed, see above.

### opt/zfs-tests/tests/functional/cli_root/zpool_remove/zpool_remove_003_pos

Setup failed, see above.

## opt/zfs-tests/tests/functional/mmp/mmp_on_off

Manual intervention.  See [write up](#file-mmp_on_off_hang-md).

## opt/zfs-tests/tests/functional/redundancy/redundancy_003_pos

```
Test: /opt/zfs-tests/tests/functional/redundancy/redundancy_003_pos (run as root) [00:47] [FAIL]
03:26:17.57 ASSERTION: Verify mirrored pool can withstand N-1 devices are failing or missing.
03:26:17.58 SUCCESS: mkdir /var/tmp/basedir.10350
03:26:17.90 SUCCESS: mkfile 268435456 /var/tmp/basedir.10350/vdev0 /var/tmp/basedir.10350/vdev1 /var/tmp/basedir.10350/vdev2 /var/tmp/basedir.10350/vdev3
03:26:18.01 SUCCESS: zpool create -m /var/tmp/testdir testpool mirror /var/tmp/basedir.10350/vdev0 /var/tmp/basedir.10350/vdev1 /var/tmp/basedir.10350/vdev2 /var/tmp/basedir.10350/vdev3
03:26:18.01 NOTE: Filling up the filesystem ...
03:26:20.25 write failed (-1), good_writes = 36, error: No space left on device[28]
03:26:20.28 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pre-record-file.10350 2>&1
03:26:21.08 SUCCESS: sync
03:26:23.09 SUCCESS: sleep 2
03:26:26.22 SUCCESS: sleep 2
03:26:26.27 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:26.77 SUCCESS: is_data_valid testpool
03:26:26.80 SUCCESS: zpool clear testpool
03:26:26.83 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:26:26.86 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:27.31 SUCCESS: clear_errors testpool
03:26:28.84 SUCCESS: sync
03:26:30.86 SUCCESS: sleep 2
03:26:33.20 SUCCESS: sleep 2
03:26:33.23 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:26:33.25 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:33.72 SUCCESS: is_data_valid testpool
03:26:33.75 SUCCESS: zpool clear testpool
03:26:33.77 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:26:33.79 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:34.25 SUCCESS: clear_errors testpool
03:26:35.85 SUCCESS: sync
03:26:37.87 SUCCESS: sleep 2
03:26:40.21 SUCCESS: sleep 2
03:26:40.23 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:26:40.24 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:40.73 SUCCESS: is_data_valid testpool
03:26:40.77 SUCCESS: zpool clear testpool
03:26:40.79 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:26:40.80 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:41.29 SUCCESS: clear_errors testpool
03:26:42.03 SUCCESS: sync
03:26:44.05 SUCCESS: sleep 2
03:26:46.39 SUCCESS: sleep 2
03:26:46.40 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:26:46.42 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:46.88 SUCCESS: is_data_valid testpool
03:26:46.96 SUCCESS: mkfile 268435456 /var/tmp/basedir.10350/vdev0
03:26:47.12 SUCCESS: zpool replace -f testpool /var/tmp/basedir.10350/vdev0 /var/tmp/basedir.10350/vdev0
03:26:49.26 SUCCESS: sleep 2
03:26:49.31 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:26:49.33 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:49.78 SUCCESS: recover_bad_missing_devs testpool 1
03:26:50.43 SUCCESS: sync
03:26:52.43 SUCCESS: sleep 2
03:26:54.63 SUCCESS: sleep 2
03:26:54.64 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:26:54.65 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:55.12 SUCCESS: is_data_valid testpool
03:26:55.21 SUCCESS: mkfile 268435456 /var/tmp/basedir.10350/vdev1
03:26:55.35 SUCCESS: zpool replace -f testpool /var/tmp/basedir.10350/vdev1 /var/tmp/basedir.10350/vdev1
03:26:57.50 SUCCESS: sleep 2
03:26:57.62 SUCCESS: mkfile 268435456 /var/tmp/basedir.10350/vdev0
03:26:57.81 SUCCESS: zpool replace -f testpool /var/tmp/basedir.10350/vdev0 /var/tmp/basedir.10350/vdev0
03:26:58.03 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:26:58.05 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:26:58.51 SUCCESS: recover_bad_missing_devs testpool 2
03:26:59.43 SUCCESS: sync
03:27:01.43 SUCCESS: sleep 2
03:27:03.65 SUCCESS: sleep 2
03:27:03.66 SUCCESS: rm -f /var/tmp/basedir.10350/pst-record-file.10350
03:27:03.68 SUCCESS: eval du -a /var/tmp/testdir > /var/tmp/basedir.10350/pst-record-file.10350 2>&1
03:27:04.13 SUCCESS: is_data_valid testpool
03:27:04.23 SUCCESS: mkfile 268435456 /var/tmp/basedir.10350/vdev2
03:27:04.55 SUCCESS: zpool replace -f testpool /var/tmp/basedir.10350/vdev2 /var/tmp/basedir.10350/vdev2
03:27:04.67 SUCCESS: mkfile 268435456 /var/tmp/basedir.10350/vdev1
03:27:04.69 ERROR: zpool replace -f testpool /var/tmp/basedir.10350/vdev1 /var/tmp/basedir.10350/vdev1 exited 1
03:27:04.69 invalid vdev specification the following errors must be manually repaired: /var/tmp/basedir.10350/vdev1 is part of active pool 'testpool'
03:27:04.69 NOTE: Performing test-fail callback (/opt/zfs-tests/callbacks/zfs_dbgmsg)
```
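The interesting part is the tail: the second replace of vdev1 was refused
because ZFS's in-use check still sees that path as an active member of
testpool.  Purely as a debugging aside (not something the test does), these
commands show what ZFS thinks of the device; the names come from the log
above:

```
# Show current pool membership, then dump whatever ZFS label (if any)
# survived the mkfile over the vdev file.
zpool status -v testpool
zdb -l /var/tmp/basedir.10350/vdev1
```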