(Part 3) Top products from r/zfs


We found 13 comments mentioning products on r/zfs and ranked the 51 resulting products by the number of redditors who mentioned them. Here are the products ranked 41-60.


Top comments that mention products on r/zfs:

u/zfsbest · 2 points · r/zfs

I did a bit of searching on your behalf, and obviously I haven't tested it (so please don't hold me responsible), but this looks like 99% the same thing as the Probox:

https://www.amazon.co.uk/RaidSonic-ICY-BOX-IB-3640SU3-drive/dp/B009DH5Q2S/ref=sr_1_35?ie=UTF8&qid=1504622540&sr=8-35&keywords=4+bay+esata

RaidSonic ICY BOX IB-3640SU3 - hard drive array
The Icy Box external 4-bay JBOD enclosure takes 4x 3.5" SATA I/II/III HDDs, with a tray-less design for easy assembly and no HDD capacity limit. Supports Windows XP/Vista/7 and Mac OS X, Plug & Play, and Hot Swap. JBOD (Just a Bunch of Disks); USB 3.0 and eSATA interfaces.

The reviews aren't too bad either from what I saw, so please let us know if you get one and it works well for you. :)

u/Fiberton · 2 points · r/zfs

Best thing to do is to buy a new case. Either this: https://www.amazon.com/SilverStone-Technology-Mini-Itx-Computer-DS380B-USA/dp/B07PCH47Z2/ref=sr_1_15?keywords=silverstone+hotswap&qid=1566943919&s=gateway&sr=8-15 (quite a lot of folks I know running mini-ITX use something like it: 8 hot-swap 3.5" bays plus 4x 2.5", https://www.silverstonetek.com/product.php?pid=452).

Or, if you want to use ALL your drives, a cheaper alternative: https://www.amazon.com/dp/B0091IZ1ZG/ref=twister_B079C7QGNY?_encoding=UTF8&th=1 You can fit 15x 3.5" drives in that, or get some 2x 2.5"-to-1x 3.5" adapters to shove some SSDs in there too: https://www.amazon.com/Inateck-Internal-Mounting-Included-ST1002S/dp/B01FD8YJB4/ref=sr_1_11?keywords=2.5+x+3.5&qid=1566944571&s=electronics&sr=1-11 (there are various companies; I only looked quickly on Amazon). That way you can have 12 drives rather than just 6.

The cheap SATA cards will fix you up, or shove this in there: https://www.amazon.com/Crest-Non-RAID-Controller-Supports-FreeNAS/dp/B07NFRXQHC/ref=sr_1_1?keywords=I%2FO+Crest+8+Port+SATA+III+Non-RAID+PCI-e+x4+Controller+Card+Supports+FreeNAS+and+ZFS+RAID&qid=1566944762&s=electronics&sr=1-1

Hope this helps :)

u/txgsync · 1 point · r/zfs

> Debugging performance issues is hard.

Absolutely. "It's hard to do" is why I have a job :-) The best short primer I've ever read on troubleshooting host/VM performance issues is Brendan Gregg's post on the USE method. Another great resource is Brian L. Wong's 1997 "Configuration and Capacity Planning for Solaris Servers"; I often laugh because the problems of the modern Cloud are just the problems of any application, magnified by increased speed and parallelization, and Brian's twenty-year-old tome holds up remarkably well if you want to prevent major capacity/performance issues.
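(As a concrete illustration of the USE method's first-pass checks on a Linux host; this sketch is ours, not the commenter's, and the tools shown are just the usual suspects.)

```sh
# USE method: for each resource, check Utilization, Saturation, Errors.
iostat -xz 1    # disks: %util column (utilization), await/queue depth (saturation)
vmstat 1        # CPU: us+sy (utilization), "r" run-queue length (saturation)
free -m         # memory utilization; swap activity (si/so in vmstat) means saturation
dmesg | tail    # errors: recent kernel and driver complaints
```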

> Linux kernel has a cscope target...

I did not know that. That's probably what I should have used; they even have a handy tutorial for getting started using it for large projects.

> I was running OpenGrok on local projects/branches, but having it web only was not that great.

Yeah, I use and abuse Grok hard every workday; that's why I naturally gravitated toward it. But cscope might be the right tool for the job. Thanks!
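(For readers who, like the commenter, didn't know about the kernel's cscope target: a minimal sketch of the workflow; the symbol queried is just an example.)

```sh
# From the root of a Linux kernel source tree:
make cscope      # builds the cscope database for the whole tree
cscope -d        # browse using the existing database (-d skips rebuilding)

# Batch queries work too; -L runs a single query, -1 means "find global definition":
cscope -d -L -1 vfs_read
```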

u/qupada42 · 2 points · r/zfs

Fair enough. How about this one then? One SSD plus a slimline drive in the space the full-size drive occupies.

https://www.amazon.com/dp/B00JYMCFXA

Or do they not have enough SATA ports to hook all of that up? Been a while since I used one of those machines.

u/old63 · 1 point · r/zfs

Thanks so much for all this!

I had found the memory and controller card below in the interim.
https://www.amazon.com/Tech-PC3-12800-PowerEdge-A3721494-Snpp9rn2c/dp/B01C7YS08U

https://www.amazon.com/LSI-Logic-9207-8i-Controller-LSI00301/dp/B0085FT2JC

I think these will work. What do you think?

On this build I probably won't try to get a SLOG for the ZIL, but I may in the future if we test these and can hook them up to our VM hosts. Do you have any recommendations for that? I know NFS does sync writes, so I think I'll need a SLOG if I do that.
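(Not from the thread, but since the question comes up: adding a SLOG later is straightforward. A minimal sketch, assuming a pool named tank, an NFS-exported dataset named tank/vmstore, and placeholder SSD device names.)

```sh
# Mirror the SLOG: if a lone log device dies while holding unflushed
# sync writes, you lose the data the ZIL was protecting.
zpool add tank log mirror /dev/disk/by-id/ssd-a /dev/disk/by-id/ssd-b

zpool status tank            # a "logs" section should now list the mirror
zfs get sync tank/vmstore    # sync=standard means NFS sync writes hit the ZIL
```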

u/monoslim · 1 point · r/zfs

Something like this maybe:

Norco DS-12D External 2U 12 Bay Hot-Swap SAS/SATA Rackmount JBOD Enclosure https://www.amazon.com/dp/B004IXYCOA/ref=cm_sw_r_cp_apa_i_XH5NDb68EB26F

u/Liwanu · 2 points · r/zfs

That's why I bought a new old-stock motherboard that uses DDR3.
My SuperMicro X8DTE-F-O is an MF beast with 192GB of RAM.
Each 16GB stick of RAM is $40.

u/wannabesq · 1 point · r/zfs

You mean like this?

Though, if either the HDD or SSD portion fails, you'd have to replace the whole thing.

It only works in Windows for now, but the potential is there.

u/dailytraffic · 1 point · r/zfs

I've used PCI-e to SATA cards before too, although I guess you could argue these controllers are sort of HBAs. https://www.amazon.com/gp/product/B005B0A6ZS

u/mercenary_sysadmin · 15 points · r/zfs

The wider your vdev, the longer resilvers will take and, for the same reason, the lower your IOPS will be. Really wide rust stripes can end up requiring weeks to scrub or resilver, with significant performance degradation the whole time due to the low IOPS involved.

> I have 14 disks 900G each

Why? That's an incredible amount of power consumption and initial expense, just to end up with a vdev that would get stomped into the dirt by a pair of 10T mirror vdevs.

edit: new HGST He10 × 4 = $1320; HP 900GB 2.5" disk × 14 = $1540
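(To make the comparison concrete: a hedged sketch of the two pool layouts being weighed; pool and device names are placeholders.)

```sh
# One wide 14-disk raidz2 vdev: random IOPS of roughly a single disk,
# because every I/O engages the whole stripe.
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn

# A pair of 10T mirror vdevs: each vdev contributes its own disk's worth of
# IOPS, and a resilver just copies one surviving disk instead of rebuilding parity.
zpool create tank mirror sda sdb mirror sdc sdd
```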