Blowaway full receive in v1 pool causes kernel panic
Steps to reproduce:
1. Create v1 pool "test".
2. Create filesystem "test/fs".
3. Create filesystem "test/fs2".
4. Add an empty file to "test/fs".
5. Snapshot "test/fs@A".
6. zfs send test/fs@A | zfs receive -F test/fs2@A2
The kernel panics during the receive above, with the following stack trace:
ffffff0007d97920 dsl_dataset_rele+0x10(0, fffffffff7a5ea30)
ffffff0007d979c0 dmu_recv_begin_sync+0xc5(ffffff0009a0f960, ffffff01dd0ca100)
ffffff0007d97a00 dsl_sync_task_sync+0x10a(ffffff0009a0f870, ffffff01dd0ca100)
ffffff0007d97a90 dsl_pool_sync+0x285(ffffff01fa7140c0, b)
ffffff0007d97b70 spa_sync+0x3b1(ffffff022a989000, b)
Looking at this, the cause appears to be that dmu_recv_begin_sync's first argument has drba_snapobj == 0 while the target of the receive already exists. In that case, dsl_dataset_rele gets passed NULL as its first argument every time. drba_snapobj can be 0 during a full send to an old pool, because the code sets the snapobj to "ds->ds_phys->ds_prev_snap_obj", which is $ORIGIN in a new pool, but on an old pool it's 0.
Updated by Electric Monk almost 5 years ago
- % Done changed from 0 to 100
- Status changed from New to Closed
commit f40b29ce2a815bcc0787acf6f520a2b74258b785
Author: Paul Dagnelie <email@example.com>
Date:   2015-04-26T22:26:13.000Z

    5809 Blowaway full receive in v1 pool causes kernel panic
    Reviewed by: Matthew Ahrens <firstname.lastname@example.org>
    Reviewed by: Alex Reece <email@example.com>
    Reviewed by: Will Andrews <firstname.lastname@example.org>
    Approved by: Gordon Ross <email@example.com>