Bug #570 (closed)

Help with Open Solaris ZFS disk replacement

Added by Michelle Knight over 11 years ago. Updated over 11 years ago.

Status: Rejected
Priority: Low
Assignee: -
Target version: -
Start date: 2010-12-23
Due date: -
% Done: 100%
Estimated time: -
Difficulty: -
Tags: -

Description

Hi Folks,

Apologies for asking this question here - I'm not even sure if this is the right place to ask it ... but I'm an old Open Solaris user ... well, old as in the last build before Oracle took the helm, and their Solaris forum doesn't work properly, as it gives a 500 error when I try to log in ... but hey, read into that failure what you will! I'll likely transfer to Illumos when it comes time to replace the server, but you know the old saying: if it ain't broke...

Long story short is that one drive in a ZFS array was totalling a few too many checksum errors for my liking, so I replaced it. After the replacement I did a scrub. It didn't get far before it clocked up over 300 errors on the new drive and degraded the set, so I've had to revert back again ... very quickly.
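
For reference, the replace and scrub were done with the usual zpool commands, roughly along these lines (pool and device names here are placeholders rather than my real ones):

    # swap the suspect disk for the new one, then watch the pool
    zpool replace tank c1t3d0 c1t5d0
    zpool status -v tank

    # once the resilver completed, verify everything with a scrub
    zpool scrub tank
    zpool status -v tank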

The prime question is this ... is there any way to test a drive for errors/status/health before trusting data to it?

Secondary question ... the raidz set was fully available while the replace was in progress. I didn't see any harm in writing to the set while the replace was running (it took the best part of the day to resilver over a terabyte) - could writing to the set during the replace have had an effect on the replacement drive's massive error rate?
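
In case it's relevant, this is roughly what I was watching while the resilver ran (pool name again a placeholder):

    zpool status -v tank
    # the per-device READ / WRITE / CKSUM columns show where errors are landing,
    # and the resilver progress is reported near the top of the output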

#1

Updated by Michelle Knight over 11 years ago

  • % Done changed from 0 to 100

Oops. I've hit a wrong button somewhere and ended up filing a bug report.

Please accept my sincere apologies and close this report. I don't seem to be able to retract it.

#2

Updated by Garrett D'Amore over 11 years ago

  • Status changed from New to Rejected

Not a bug.
