A good deal on 12TB Seagate IronWolf PRO drives if anyone is building a NAS

So, I'll continue the thought process however -
the renewed Toshiba 10TB drives are "direct" from Amazon US at just over £100/unit in "Excellent" condition with a 30-day return and a 12-month warranty, so I might still take the chance! 🙏 That gives £21.22/TB for 30TB or £18.62/TB for 40TB.
Tbh it's all about cost per TB with a reasonable warranty. I have a mix of Seagate, Toshiba and WD Red in the current chassis.
Toshiba are decent though and do have lower failure rates than Seagates. My own experience is that if a drive is gonna fail, it'll typically be in the first year. I've got 4TB NAS drives more than 10 years old now.

I'm just loath to fill 4 bays when I'm already looking at retiring all my 2 and 3TB drives with ... 18-20s. 10TB wouldn't last the lifespan with my current content all being re-ripped (when I get a chance and have the Blu-ray, not the DVD) to 4K.
 
That is why I went with a 5-bay TerraMaster, so I could have 2 x 2 pairs and a backup drive. It would allow me to take the first pair out, for example, put a fresh pair in and transfer from the 2nd pair onto the new 1st pair. Then I could swap the 2nd pair for the original 1st pair, and repeat the process to upgrade that second pair. That plan has now gone and I will be building my main server in a Fractal Define R5. It will have that 4-channel card I mentioned earlier. This will give me 8 SATA connectors (expandable) and 2 NVMe slots.
I will be able to add 4 x 12TB pairs into this, which will see me off for a very long time.

I have given up on TrueNAS as the NFS side of it was subpar for my use cases. If you take out the sharing automation from TrueNAS, the rest of what remains is trivial to automate for a Linux admin like me :D
 
The main reason I was considering renewed is that I've got 3 x 6TB renewed HGST SATA drives in my desktop and have had them since 2021/2022/2023 with no issues (well, until tomorrow now!).

They're currently used for longer-term storage, mainly media files, and are regularly backed up to high capacity external Seagate drives.

Maybe leaning towards the 16TB IronWolf Pros now and leaving a couple of empty slots. :unsure:
 
I have a Define R5 case too! Trivial to fit 12 drives in it!
 
Honestly, you won't regret the spare slots. Back in the day the price per TB over 4TB was ridiculous, so it made sense, but now with 16-24TB drives all having a reasonable cost per TB it's crazy to buy under 16TB, in my opinion at least.

The issue is the horrendous restore times, but the spare slots DO allow you to build a second array in parallel, as Karl said!


 
HGST are my first choice. My 4TB are from 2012 and 2014. And Backblaze love them.
 
I personally didn't want to go over 8TB per drive, but the sweet spot price-wise was 12TB, so I risked it.
The restore times are what worry me, but the new plan of using the fibre network to do backups to a separate machine with ZFS snapshots and send has eased my concerns somewhat.
 
I went for it at £118.
 
PS: I know it only has a 30-day warranty. If it survives those 30 days then we are in normal territory for these hard drives, I think.

It will be mirrored with my existing 12TB IronWolf Pro. The chances of both going at the same time are low, especially as they are from different manufacturers and not from the same batch.

I will immediately be "sending" the datasets across to my existing archival (SMR) 8TB drive so I have a live backup. The data is also still on my 2 x 4TB, so the risk of me losing all my data is very low.
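For anyone curious what that "sending" looks like in practice, here is a minimal sketch; the pool and dataset names (tank/media, backup) are made up for illustration:

```bash
# Take a snapshot of the mirrored dataset and copy it to the archival pool.
zfs snapshot tank/media@2024-01-01
zfs send tank/media@2024-01-01 | zfs recv backup/media

# Later runs only need to send the delta between the last snapshot and a new one.
zfs snapshot tank/media@2024-01-08
zfs send -i tank/media@2024-01-01 tank/media@2024-01-08 | zfs recv backup/media
```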


 
That is a very good price per TB. Tempted myself, as ex-NAS drives are usually good (and given the form factor these are 100% ex-NAS drives, likely from a NetApp). Can you print out the SMART data from that drive when it arrives so I have an idea of its use prior to sale?

If any good, I'll use the same vendor (may as well learn from your drives!).
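Getting that SMART data is a one-liner with smartmontools; a sketch, assuming the new disk shows up as /dev/sdX:

```bash
# Pull the headline wear indicators from the SMART attribute table.
smartctl -a /dev/sdX | grep -Ei 'power_on_hours|start_stop|reallocated'

# SAS drives report in a different format; -x dumps the full device logs.
smartctl -x /dev/sdX
```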
 
Will do. I expect delivery by Thursday according to eBay, so I will probably be adding it to the array either Thursday or Friday night. Remind me if I forget, please.
 
Yeah, it's either very low use or very high. NetApp owners tend to swap an array all at once due to capacity (from an old work example of going from 2 and 3TB to 6 and 8TB drives). It's better for them to do it a shelf at a time, and it's also better for them to e-waste the drives.

Only warning will be to reformat out of the stupid NetApp sector size if the vendor hasn't done it for you.

(Given it's HGST, it's certain to have come out of a NetApp or similar vendor array.)
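If anyone hits that, a rough recipe (sg3_utils assumed installed, /dev/sdX illustrative); NetApp disks typically ship formatted with 520-byte sectors, which Linux filesystems won't touch:

```bash
# Check the logical sector size first; 520 means it needs reformatting.
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sdX

# Low-level reformat back to 512-byte sectors. Destroys all data and can
# take several hours on a 10TB drive.
sg_format --format --size=512 /dev/sdX
```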
 
If you can handle SAS drives then the 10TB are even better value at £75 each.

https://www.ebay.co.uk/itm/29730602...pid=5339023013&customid=&toolid=10001&mkevt=1
 
I just put the TERRAMASTER D5-300C on classifieds.


 
OK guys Gromett starquake you'd better sit down!

Finalised spec of components after listening to you guys and watching a gazillion videos on nascompare / TerraMaster / etc. Will be going with a TerraMaster F4-423 enclosure with a Samsung EVO 256GB M.2 NVMe (for OS and cache) and two (new :eek: - to get the 5-year manufacturer warranty) Toshiba MG10ACA20TE 20TB SATA HDDs for storage. The drives are only just over £30 more expensive than the cheapest refurbed IronWolf Pro 20TB discs I could find, and from a well-known supplier I've used before.

Thought if we're gonna do it let's do it right(ish)!

Obviously, the above decision is made on the proviso you don't come back and say, "DON'T BE STOOPIT!"🙏

ps many thanks for all your patience and constructive input :notworthy2:.

pps you do, of course, realise you’ll be holidaying in Scotland soon when we're setting it all up!:whistle2:
 
Nice one (y) As I said before, my only concern with going so big is that the resilvering process can be a major risk point.
 
Same, but equally with ZFS you should be able to recover ;). My only "suggestion" is to split OS and cache duties: cache puts a very high load on the SSD if you use it a lot. I actually splashed out on a higher-spec M.2 SSD (built for constant database write loads) for cache.

The OS disk is hardly used; some places I've worked use USB drives for the OS (you run the VMs on the mirrored pair). In fact, some high-end servers these days have 2 internal USB ports precisely for this.
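For the ZFS case, splitting the duties is one command once the OS lives elsewhere; a sketch with illustrative names (pool tank, dedicated NVMe as the L2ARC read cache):

```bash
# Add the NVMe as an L2ARC cache device to the existing pool.
zpool add tank cache /dev/nvme0n1

# It shows up under its own "cache" section in the pool layout.
zpool status tank
```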
 
Same, but equally with zfs you should be able to recover ;). My only "suggestion" is to split OS and cache duties, cache puts a very high load on the ssd if you use it a lot, I actually spent on a higher spec m.2 SSD (built for database constant write loads) for cache.

The OS disk is hardly used, some places I've worked, use usb drives for OS. (you run VM's on the mirror pair). In fact some high end servers these days have 2 internal USB ports precisely for this.
If you use a USB drive, ensure the /var/log folder and /tmp folder are stored elsewhere. USB drives have very poor write performance and lifespan.
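A minimal sketch of what that relocation looks like in /etc/fstab, assuming the pool is mounted at /tank:

```bash
# /tmp lives in RAM instead of wearing out the USB stick.
tmpfs  /tmp  tmpfs  defaults,noatime,mode=1777  0  0

# /var/log is bind-mounted onto a directory on the spinning-rust pool.
/tank/varlog  /var/log  none  bind  0  0
```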
 
ZFS doesn't avoid the resilvering risk on large drives. It still needs to be done after a drive failure.

For those who don't know what I am talking about: if you have 2 drives in a mirrored pair, for example, and one fails, then when you put a replacement drive in, the data needs to be recopied (resilvered) across to the new drive.
During this process the remaining original drive is put under 100% load for a long period of time, and if that drive fails you lose everything.
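In ZFS terms the replacement is one command, and you can watch the survivor sweat; a sketch with illustrative pool and device names:

```bash
# Swap the dead disk for the new one; this kicks off the resilver.
zpool replace tank /dev/disk/by-id/ata-DEAD-DRIVE /dev/disk/by-id/ata-NEW-DRIVE

# The pool stays online, but the remaining mirror member is under full load
# until the resilver reaches 100%.
zpool status -v tank
```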

Here is a table with resilvering times based on drive size and speed. Remember, these assume the system is not doing anything else; any other activity will slow things down.

[image: table of resilvering times by drive capacity and speed]


8TB is the sweet spot in many professionals' opinion. 18 hours to recopy all the data is a long time for the drive to be under 100% load. I still prefer 4TB, but it is just not cost-effective anymore, especially with the cost of electricity and the amount of data.

When you go to a 20TB hard drive you are talking about 2 full days or more.
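The rough maths behind those figures is just capacity divided by sustained throughput; a sketch assuming ~120MB/s averaged across the rebuild (which lines up with the 18-hour figure for 8TB above; real-world rates are usually worse):

```bash
# Back-of-envelope resilver time in hours: (TB * 1,000,000 MB) / 120 MB/s / 3600 s.
for tb in 4 8 12 20; do
  printf '%2d TB: ~%d hours\n' "$tb" $(( tb * 1000000 / 120 / 3600 ))
done
```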

This resilvering period is a period of time when your data is at serious, serious risk.

Remember, if you buy both hard drives at the same time and they are the same model, they are likely to have come off the production line at the same time, so they will have similar defects, if any.

If you buy 2 hard drives from different manufacturers, you reduce the relative risk a bit, because it is unlikely for 2 of those drives to fail at the same time.

My personal view is this: don't buy big. But if you do, hedge your bets by buying from 2 different manufacturers and ALWAYS ensure you have 2 backups.


 
Oh, and just a side note: the term resilvering comes from glass mirrors, where it describes the process of restoring the mirror's reflective surface.

So restoring a mirror, in both RAID and glass, was called re-silvering. Over time the term extended to the rebuilding of other RAID levels as well.
 
Indeed, you can move these onto the permanent volume (which ZFS mounts as the system boots).

I usually also burn a spare USB system drive (or run RAID on 2, which is more tricky).

I'm rebuilding as mentioned (likely to Proxmox, or maybe home-grown Linux on Debian as I ran before Hyper-V). My experiment since 2017 on Storage Spaces has worked (it's certainly faster than my last Linux system using md RAID devices, which I moved away from as ZFS was ... unstable on Debian at that time). I'm now running ZFS on Debian inside VMs and it's a lot more reliable, though reliant at the moment on the underlying Storage Spaces disk IO.

I'm actually considering the same case as Gellyneck, as I really don't need the Xeon CPU I have now for the CPU load, and it'll allow me to rebuild the Windows Storage Spaces over to Proxmox and change the drives in that case for some additional space. Going from 12 drives down to (potentially) 4 x 18s would save 25-30p a day by my figures: it's about 40W extra to run 12 drives instead of 4 when NOT accessing data (more when active), and 40W over 24 hours is about 960Wh, call it 1kWh a day; 365 days at 25p/kWh is about £100 a year, or £300 over 3 years saved on the current server.
In other words, buying a new NAS actually makes some sense on power use alone. When you take into account that a modern NAS uses less power on the CPU too, it's likely self-funding in power savings, in effect.

You are right on the resilvering times being horrid, though.
 
I have moved my Ryzen 3600 to be my NAS system. I am swapping out the 3600 for a 5600GT so that I can take the NVIDIA GTX 1660 Super out of the case and reduce power usage.
The Ryzen 5600GT is more than capable of doing live 4K UHD transcoding for Jellyfin.

I run Proxmox on the server. I did have a VM running TrueNAS with the hard drives passed through, but TrueNAS gave me very little of use, as I am perfectly capable of typing in or scripting ZFS commands myself, and the problems I had with it were frustrating.
So I got rid of the TrueNAS VM and moved the ZFS pools to the hardware level. This makes things so much easier. Before, I had to share the datasets from the TrueNAS VM with the host, then share them again back to the other containers/VMs, such as Jellyfin, that needed access. The NFS mounts were seeing around a 20MB/s transfer rate max. Now I am getting the full hardware rate of around 120MB/s, and I can also use bind mounting.
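Bind mounting a host dataset into an LXC guest is a single Proxmox command; a sketch with an illustrative container ID and paths:

```bash
# Expose the host's /tank/media inside container 101 at /media.
pct set 101 -mp0 /tank/media,mp=/media

# The container now reads the dataset at native disk speed, with no NFS layer.
```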

Normally I would just run native Linux KVM/libvirt, but Proxmox just makes it so easy to manage both LXC containers and VMs. It doesn't do Docker, which is a shame, so I have a VM set up just for my Docker needs.
 
On the enclosure, you might want to have a squint at their F4-424 as well. A bit more powerful, easier installation of SSDs (if you're going to use them), and it comes with TOS 6 (again, if you're going to use that - probably not, though).
 
This point is why I'll likely go Proxmox too. I used to use native Linux KVM in 2017 on the last iteration of the NAS (2012-2017). However, the performance was severely lacking at that time due to a fundamental error on my part: I didn't match the drive sector size to the formatted LVM partitions, meaning every write resulted in 4 on disk with the RAID, slowing peak write IOPS SUBSTANTIALLY. It made good drives perform like 20-30MByte/sec. The big issue was that I didn't have spare disks at the time to let me move the live data to another box and reformat to fix it, as I had migrated the drives (and the data) in before I spotted the fundamental mistake!
At the moment I have 3 VMs running the containers that run the system (2 Docker, one LXC). I plan to move these as-is to Proxmox, but obviously just move the LXC containers from a VM to running native.

The issue with native KVM and LVM partitions is that if you make one error like sector size, there's no "safety": when formatting on Windows, at least it says "that sector size is unwise", whereas KVM lets you make silly mistakes. ZFS also has some help on this.

When I built the Windows-based box I fixed all the sector sizes, and the SSD cache tier massively improved IOPS under (heavy) load; all the old drives returned to their normal 120-160MByte/sec performance as they were no longer writing every block 4 times.
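Checking the alignment up front is cheap; a sketch of the sanity checks (device name illustrative) that would have caught the 4-writes-per-write trap:

```bash
# Compare physical vs logical sector size before carving anything up.
lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sdX

# Align LVM data to 1MiB so extents sit cleanly on 4K physical sectors.
pvcreate --dataalignment 1m /dev/sdX
```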
I can reluctantly recommend Hyper-V too for virtualisation; the performance is great (beats KVM), it's just that I wish the Windows licences didn't cost me as much as they do (I do have support tickets available, etc). My old work used to be a huge VMware shop, but with the recent licensing changes there I can't recommend it - hence why Proxmox seems the best to move to now.


 
Oh dear. I had a couple of drinks and just got curious about the state of bulk purchases of DVDs etc.
If the top layer in that pic is indicative of what's below, bargain. Only thing I'd say is, if you're hoping to rip all those, you're going to need a fair few (more) drives, given the time to rip each is what, 20 mins?

Even with 3 DVD drives and 2 BR, mine took an age. I've now taken to just downloading full BR/DVD rips (ie, uncompressed) of the stuff I own physically, as it's quicker with the 1G internet than ripping. 1G internet = 125MByte/sec, 8 secs a gig, so a DVD in under a minute. Ripping it takes around 20 mins.

The thing I value above all else isn't the movie discs, it's the high-quality series that are not on the streaming services in similar quality. There have been quite a few (comedy) series that I like which have been deleted from existence on streaming for being too controversial today.
 
Can I just ask, what on earth do you guys need so much storage space for? Sounds to me, as a layman, like you could be running Amazon's cloud services.
 

I work on my computer day in, day out, 12+ hours a day minimum. I have a little script that uses makemkvcon. It dings when one is done and I just put the next one in.
Although I will see how it goes, I may be tempted to buy another couple of DVD drives dedicated just to DVD work and save my good Blu-ray one for Blu-ray rips.
I can virtualise the task very easily.
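The script is nothing fancy; a rough sketch of the rip-and-ding loop (output path and sound file are illustrative, and the disc index may differ per machine):

```bash
# Rip every title from the disc in drive 0, then play a chime and wait
# for the next disc to be loaded.
while true; do
  makemkvcon mkv disc:0 all /tank/rips \
    && paplay /usr/share/sounds/freedesktop/stereo/complete.oga
  read -rp "Swap discs, then press Enter... "
done
```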

I did the entire 20 seasons of Last of the Summer Wine plus a couple of UHD Blu-rays in one work day.

I will just plink away at it. No big panic. I am really hoping there are loads of box sets of TV series in there.

I got an offer for £180 and took it, so I won't have to sell too many at a car boot to get my initial outlay back, and as I have a decent-sized workshop, storage is not an issue :)
 