(Part 2) Top products from r/zfs
We found 24 product mentions on r/zfs. We ranked the 51 resulting products by number of redditors who mentioned them. Here are the products ranked 21-40. You can also go back to the previous section.
21. HIGHFINE Universal 9.5mm SATA to SATA 2nd SSD HDD Hard Drive Caddy Adapter Tray Enclosures for DELL HP Lenovo ThinkPad ACER Gateway ASUS Sony Samsung MSI Laptop
Sentiment score: 0
Number of reviews: 1
100% brand new and high quality. Add a 2nd drive to your laptop by replacing your optical drive (CD/DVD-ROM). This device accepts 2.5 inch 9.5mm / 7mm high or less SATA HDD / SSD. Compatible with: DELL HP LENOVO ThinkPad ACER Gateway ASUS SONY SAMSUNG MSI laptops which have a 9.5mm high CD/DVD-ROM drive. Pa...
22. SilverStone Technology Premium Mini-Itx/DTX Small Form Factor NAS Computer Case, Black DS380B-USA Newest Version (SST-DS380B-USA)
Sentiment score: 2
Number of reviews: 1
Supports 12 total drives with 8 hot-swappable 3.5" or 2.5" SAS/SATA and 4 fixed 2.5" drives. Unbelievable storage space and versatility for a small form factor. Premium brushed aluminum front door. Supports graphics cards up to 11" with supporter design from TJ08-E. Lockable power button design and adjustable LE...
23. Antec Sonata Proto Black ATX Mid Tower Computer Case
Sentiment score: -1
Number of reviews: 1
9 drive bays and 7 expansion slots. Washable, removable intake air filter. Switch-controlled 120mm rear TwoCool fan. HD Audio connectors. Front I/O panel: 2x USB 2.0 ports, 1x speaker, 1x mic. Power supply: none. RoHS compliant.
24. The Design and Implementation of the FreeBSD Operating System (2nd Edition)
Sentiment score: 0
Number of reviews: 1
Addison-Wesley Professional
25. ZFS on Linux: Internals and Administration
Sentiment score: 0
Number of reviews: 1
26. FreeBSD Mastery: ZFS (IT Mastery) (Volume 7)
Sentiment score: 0
Number of reviews: 1
27. FreeBSD Mastery: Advanced ZFS (IT Mastery) (Volume 9)
Sentiment score: 0
Number of reviews: 1
28. 3WARE Cable Multi-lane Internal Cable (SFF-8087)
Sentiment score: 1
Number of reviews: 1
Length is 0.5m. Connects the controller's SFF-8087 multi-lane connector(s) to the drives' or backplane's discrete SATA connector(s). It combines the RAID controller's multiple SAS/SATA ports into single locked connections. Model: CBL-SFF8087OCF-05M. Type: internal cable. Description: Connects the con...
29. Qlogic Qle4060C-Ck Qlogic 1Gb-Pcie-Iscsi Single Port Hba
Sentiment score: 1
Number of reviews: 1
Type: Fibre Channel Host Bus Adapter. Host interface: PCI Express.
30. RPC-2008 2U Server Case w/ 8 Hot-Swappable SATA/SAS Drive Bay
Sentiment score: 0
Number of reviews: 1
2U rackmount design. 8x hot-swappable SATA/SAS drive bays, 1x slim CD-ROM bay, 1x FDD bay, 4x 80mm middle fans. Switches: power on/off x1, system reset x1. Indicators: power on/off x1, HDD x1, network x2. Connectors: one front-accessible USB port. Motherboard compatibility: supports EEB (12"x13"), CEB (12...
31. LSI Logic SAS9200-8E 8PORT Ext 6GB Sata+SAS Pcie 2.0
Sentiment score: 1
Number of reviews: 1
Compatibility: SAS Controller. Packaged Quantity: PCI Express x8.
32. Supermicro Intel X58 DDR3 800 LGA 1366 Motherboards X8DTE-F-O
Sentiment score: 0
Number of reviews: 1
CPU: dual LGA1366 sockets support quad-core Intel Xeon processor 5500 sequence (Nehalem-EP); QPI up to 6.4 GT/s. Chipset: Intel 5520 (Tylersburg) & ICH10R + IOH-36D. Memory: 12x 240-pin DDR3-1333/1066/800 DIMM, supports up to 96GB ECC/REG memory or up to 24GB ECC/unbuffered memory. Slots: 4x PCI-Expre...
33. Configuration and Capacity Planning for Solaris Servers
Sentiment score: 0
Number of reviews: 1
34. StarTech.com 2 Port SATA 6 Gbps PCI Express eSATA Controller Card - Storage Controller - 2 Channel - eSATA 6Gb/s - 6 Gbit/s - PCIe - PEXESAT32
Sentiment score: 1
Number of reviews: 1
Add two eSATA 3.0 (6Gbps) ports for high-speed access to large external storage solutions. PCI Express eSATA/SATA card. SATA 6 Gbps controller. PCI-e dual eSATA. 2-port SATA 6 Gbps PCI Express eSATA controller card. Includes full and low profile brackets.
35. AeroPress Coffee and Espresso Maker - Quickly Makes Delicious Coffee Without Bitterness - 1 to 3 Cups Per Pressing
Sentiment score: 1
Number of reviews: 1
Popular with coffee enthusiasts worldwide, the patented AeroPress is a new kind of coffee press that uses a rapid, total-immersion brewing process to make smooth, delicious, full-flavored coffee without bitterness and with low acidity. Good-bye, French press! The rapid brewing AeroPress avoids the bit...
36. Norco DS-12D External 2U 12 Bay Hot-Swap SAS/SATA Rackmount JBOD Enclosure
Sentiment score: 0
Number of reviews: 1
12x hot-swappable SATA II/III/SAS 6G drive bays. 3x SFF-8088 external connectors; each SFF-8088 port supports 4 SAS or SATA drives. LED indicators for power and activity on each HDD tray. Backplanes are horizontally mounted for better ventilation. RoHS compliant, OS independent. Comes with full range power su...
37. HP 900GB 6G SAS 10K 900 SAS 16 MB Cache 2.5-Inch Internal Bare or OEM Drives 619291-B21
Sentiment score: -1
Number of reviews: 1
Hard drive capacity: 900 GB. Hard drive size: 63.5 mm (2.5"). Hard drive rotational speed: 10000 RPM.
38. I/O CREST 2 Port SATA III PCI-e 2.0 x1 Controller Card Asmedia ASM1061 Non-Raid with Low Profile Bracket SY-PEX40039
Sentiment score: 0
Number of reviews: 1
We recommend a fresh Windows install with this card; drivers are required for this card to function. ASM1061 chipset (Asmedia 1061 SATA host controller). Supports hot plug and hot swap. Supports communication speeds of 6.0Gbps, 3.0Gbps, and 1.5Gbps, 2 ports Serial ATA, Native Command Queuing (NCQ), Port Mult...
39. Samsung Memory M393B2K70CM0-CF8 16GB DDR3 1066 ECC Registered Bare
Sentiment score: 0
Number of reviews: 1
SAMSUNG PART# M393B2K70CM0-CF8. PC3-8500R DDR3 1066 16GB ECC REG 4Rx4. FOR SERVER ONLY - NOT FOR DESKTOP SYSTEMS.
40. LSI Logic SAS 9207-8i Storage Controller LSI00301
Sentiment score: 0
Number of reviews: 1
8 internal 6 Gb/s SATA + SAS ports. Low-profile form-factor design. Supports up to 256 SAS or SATA end devices. Supports SSDs, HDDs, and tape devices. Fusion-MPT 2.0 architecture can achieve more than 700,000 I/Os per second. Supports major operating systems. RoHS compliant.
Linking OP's problem here...
Chances are 9/10 that the CPU is not "busy", but instead bumping up against a mutex lock. Welcome to the world of high-performance ZFS, where pushing forward the state-of-the-art is often a game of mutex whac-a-mole!
Here's the relevant CPU note from the post:
> did a perf top and it shows most of the kernel time spent in _raw_spin_unlock_irqrestore in z_wr_int_4 and osq_lock in z_wr_iss.
Seeing "lock" in the name of a kernel symbol is often a helpful clue. So let's do some research: what is "z_wr_iss"? What is "osq_lock"?
I decided to pull down the OpenZFS source code and learn by searching/reading. Lots more reading than I can outline here.
txgsync: ~/devel$ git clone https://github.com/openzfs/openzfs.git
txgsync: ~/devel$ cd openzfs/
txgsync: ~/devel/openzfs$ grep -ri z_wr_iss
txgsync: ~/devel/openzfs$ grep -ri osq_lock
Well, that was a bust. It's not in the upstream OpenZFS code. What about the zfsonlinux code?
txgsync: ~/devel$ git clone https://github.com/zfsonlinux/zfs.git
txgsync: ~/devel$ cd zfs
txgsync: ~/devel/zfs$ grep -ri z_wr_iss
txgsync: ~/devel/zfs$ grep -ri osq_lock
Still no joy. OK, time for the big search: is it in the Linux kernel source code?
txgsync: ~/devel$ cd linux-4.4-rc8/
txgsync: ~/devel/linux-4.4-rc8$ grep -ri osq_lock
Time for a cup of coffee; even on a pair of fast, read-optimized SSDs, digging through millions of lines of code with "grep" takes several minutes.
include/linux/osq_lock.h:#ifndef LINUX_OSQ_LOCK_H
include/linux/osq_lock.h:#define LINUX_OSQ_LOCK_H
include/linux/osq_lock.h:#define OSQ_LOCK_UNLOCKED { ATOMIC_INIT(OSQ_UNLOCKED_VAL) }
include/linux/osq_lock.h:static inline void osq_lock_init(struct optimistic_spin_queue *lock)
include/linux/osq_lock.h:extern bool osq_lock(struct optimistic_spin_queue *lock);
include/linux/rwsem.h:#include <linux/osq_lock.h>
include/linux/rwsem.h:#define __RWSEM_OPT_INIT(lockname) , .osq = OSQ_LOCK_UNLOCKED, .owner = NULL
include/linux/mutex.h:#include <linux/osq_lock.h>
kernel/locking/Makefile:obj-$(CONFIG_LOCK_SPIN_ON_OWNER) += osq_lock.o
kernel/locking/rwsem-xadd.c:#include <linux/osq_lock.h>
kernel/locking/rwsem-xadd.c: osq_lock_init(&sem->osq);
kernel/locking/rwsem-xadd.c: if (!osq_lock(&sem->osq))
kernel/locking/mutex.c:#include <linux/osq_lock.h>
kernel/locking/mutex.c: osq_lock_init(&lock->osq);
kernel/locking/mutex.c: if (!osq_lock(&lock->osq))
kernel/locking/osq_lock.c:#include <linux/osq_lock.h>
kernel/locking/osq_lock.c:bool osq_lock(struct optimistic_spin_queue *lock)
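As an aside, searches like this can be sped up a lot by restricting grep to C sources and headers, which skips binaries, firmware blobs, and documentation. A quick sketch on a scratch tree (the paths and file contents here are made up for illustration, not taken from a real checkout):

```shell
#!/bin/sh
# Build a tiny fake source tree to search (illustration only).
set -e
tree=$(mktemp -d)
mkdir -p "$tree/kernel/locking"
echo 'bool osq_lock(struct optimistic_spin_queue *lock);' > "$tree/kernel/locking/osq_lock.h"
echo 'not source code' > "$tree/firmware.bin"
# Restricting the scan to C sources and headers skips everything else:
grep -rn --include='*.c' --include='*.h' osq_lock "$tree"
rm -rf "$tree"
```

On a real kernel tree the same --include filters avoid scanning the non-C bulk of the checkout, which is where most of the grep time goes.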
For those who don't read C well -- and I number myself among that distinguished group! -- here's a super-quick primer: if you see a file with ".h" at the end of the name, that's a "header" file. Basically, it declares variables and functions that are used elsewhere in the code. It's really useful to look at headers, because they often have helpful comments telling you what a given variable is for. If you see a file with ".c" at the end, that's the code that does the work rather than just declaring stuff.
It's z_wr_iss that's driving the mutex lock. There's a good chance I can ignore the locking code itself (which is probably fine; at least I hope it is, because ZFS on Linux is probably an easier place to push through a fix than core kernel I/O locking semantics) if I can figure out why we're competing over the lock, which is the actual problem. Back to grep...
txgsync: ~/devel/linux-4.4-rc8$ grep -ri z_wr_iss
MOAR COFFEE! This takes forever. Next hobby project: grok up my source code trees in ~devel; grep takes way too long.
...
...
And the search came up empty. Hmm. Maybe the _iss suffix is generated at runtime, and doesn't actually appear in the code? I probably should understand what I'm pecking at a little better. Let's go back to the ZFS On Linux code:
mbarnson@txgsync: ~/devel/zfs$ grep -r z_wr
module/zfs/zio.c: "z_null", "z_rd", "z_wr", "z_fr", "z_cl", "z_ioctl"
Another clue! We've figured out the Linux kernel name of the lock we're stuck on, and that z_wr is defined in "zio.c". Now this code looks pretty familiar to me. Let's dive into the ZFS On Linux code and see why z_wr might be hung up on a mutex lock of type "_iss".
txgsync: ~/devel/zfs$ cd module/zfs/
txgsync: ~/devel/zfs/module/zfs$ vi zio.c
z_wr is a type of IO descriptor:
const char *zio_type_name[ZIO_TYPES] = {
    "z_null", "z_rd", "z_wr", "z_fr", "z_cl", "z_ioctl"
};
What about that z_wr_iss thing? And competition with z_wr_int_4? I've gotta leave that unanswered for now, because it's Saturday and I have a lawn to mow.
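One plausible reading -- an assumption on my part, not something the grep above confirms -- is that names like z_wr_iss are composed at runtime from the zio type name plus a taskq-phase suffix ("iss" for issue, "int" for interrupt), with a numeric index appended when several taskqs serve one phase. That would explain why "z_wr_iss" never appears literally in the source. A throwaway sketch of that naming scheme:

```shell
#!/bin/sh
# Sketch (my assumption, not verified in the source above): pipeline
# worker names look like <zio type> + <taskq phase>, e.g. "z_wr" +
# "_iss", with a numeric index when a phase has several taskqs.
type_name="z_wr"              # one entry of zio_type_name[] in zio.c
for phase in iss int; do
  echo "${type_name}_${phase}"
done
echo "${type_name}_int_4"     # indexed variant, as seen in the perf output
```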
It seems there are a few obvious -- if tentative -- conclusions:
It's just a hypothesis, but I think it may have some legs and needs to be ruled out before other causes can be ruled in.
I was willing to dive into this a bit because I'm in the midst of some similar tests myself, and am also puzzled why the IO performance of Solaris zones so far outstrips ZFSoL under Xen; even after reading Brendan Gregg's explanation of Zones vs. KVM vs. Xen, I obviously don't quite "get it" yet. I probably need to spend more time with my hands in the guts of things to know what I'm talking about.
TL;DR: You're probably tripping over a Linux kernel mutex lock that is waiting on a Xen ring buffer polling cycle; this might not have much to do with ZFS per se. Debugging Xen I/O scheduling is hard. Please file a bug.
ADDENDUM: The Oracle Cloud storage is mostly on the ZFS Storage Appliances. Why not buy a big IaaS instance from Oracle instead and know that it's ZFS under the hood at the base of the stack? The storage back-end systems have 1.5TB RAM, abundant L2ARC, huge & fast SSD SLOG, and lots of 10K drives as the backing store. We've carefully engineered our storage back-ends for huge IOPS. We're doubling down on that approach with Solaris Zones and Docker in the Cloud with Oracle OpenStack for Solaris and Linux this year, and actively disrupting ourselves to make your life better. I administer the architecture & performance of this storage for a living, so if you're not happy with performance in the Oracle Cloud, your problem is right in my wheelhouse.
Disclaimer: I'm an Oracle employee. My opinions do not necessarily reflect those of Oracle or its affiliates.
Best thing to do is to buy a new case. Either this one: https://www.amazon.com/SilverStone-Technology-Mini-Itx-Computer-DS380B-USA/dp/B07PCH47Z2/ref=sr_1_15?keywords=silverstone+hotswap&qid=1566943919&s=gateway&sr=8-15 -- quite a lot of the mini-ITX folks I know are using something like it, with 8 hot-swap 3.5" bays and 4x 2.5": https://www.silverstonetek.com/product.php?pid=452 Or, if you want to use ALL your drives, a cheaper alternative: https://www.amazon.com/dp/B0091IZ1ZG/ref=twister_B079C7QGNY?_encoding=UTF8&th=1 You can fit 15x 3.5" in that, or get some 2x 2.5"-to-1x 3.5" adapters to shove some SSDs in there too: https://www.amazon.com/Inateck-Internal-Mounting-Included-ST1002S/dp/B01FD8YJB4/ref=sr_1_11?keywords=2.5+x+3.5&qid=1566944571&s=electronics&sr=1-11 (there are various companies; I looked quickly on Amazon). That way you can have 12 drives rather than just 6. The cheap SATA cards will fix you up, or shove this in there: https://www.amazon.com/Crest-Non-RAID-Controller-Supports-FreeNAS/dp/B07NFRXQHC/ref=sr_1_1?keywords=I%2FO+Crest+8+Port+SATA+III+Non-RAID+PCI-e+x4+Controller+Card+Supports+FreeNAS+and+ZFS+RAID&qid=1566944762&s=electronics&sr=1-1 Hope this helps :)
Can you link me to a good example? Preferably one suited for a homelab, ie not ridicu-enterprise-priced to the max? This is something I'd like to play with.
edit: is something like this a good example? How is the initial configuration done - BIOS-style interface accessed at POST, or is a proprietary application needed in the OS itself to configure it, or...?
Current: (6-1) x 4 TB = 20 TB
New:
(3-1) x 6 TB = 12 TB
(3-1) x 4 TB = 8 TB
20 TB total
You don't gain any space by doing this, though you do prepare for the future.
Are you able to add more drives to your system, perhaps externally? I've personally used these Mediasonic 4-bay enclosures along with an eSATA controller (though the enclosures also support USB3). Get some black electrical tape though, because the blue lights on the enclosure are brighter than the sun. The only downside with port-splitter enclosures is that if one drive fails and knocks out the SATA bus, the other 3 drives will drop offline too. The infamous 3 TB Seagates did that, but I had other drives (both 3 TB WD and 2 TB Seagates) fail without interfering with the other drives. Nothing was permanently damaged; just had to remove the failed drive before the other 3 started working again. Also, the enclosure is not hot-swap; you have to power down to replace drives. But hey, it's $99 for 4 drive bays.
6 TB Red drives are $200 right now ($33/TB); 8 TB are $250 ($31/TB), and 10 TB are $279 ($28/TB).
Instead of spending $600 (three 6 TB drives) and getting nothing, spend $692 ($558 for two 10 TB drives, $100 for enclosure, $30 for controller, $4 for black electrical tape) and get +10 TB by adding a pair of 10 TB drives in a mirror in an enclosure, and still have another 2 bays free for future expansion.
(6-1) x 4 TB = 20 TB
(2-1) x 10 TB = 10 TB
30 TB total, $692 for +10 TB
Later buy another two 10 TB drives and put them in the two empty slots:
(6-1) x 4 TB = 20 TB
(2-1) x 10 TB = 10 TB
(2-1) x 10 TB = 10 TB
40 TB total, $558 for +10 TB
Then in the future you only have to upgrade two drives at a time, and you can replace your smallest drives with the now-replaced drives.
You can repeat this with a second enclosure, of course. :)
Don't forget that some of your drives will fail outside of warranty, which can speed your replacement plans. If a 4 TB drive fails, go ahead and replace it with a 10 TB drive. You won't see any immediate effect, but you'll turn that 20 TB RAIDz1 into 50 TB that much quicker.
Oh, and make sure you've set your recordsize to save some space! For datasets where you're mainly storing large video files, set your recordsize to 1 MB: "zfs set recordsize=1M poolname/datasetname". This only takes effect on new writes, so you'd have to re-write your existing files to see any difference. You can rewrite files in place with "cp -a filename tmpfile; mv tmpfile filename", or, much easier, just create a new dataset with the proper recordsize, move all the files over, then delete the old dataset and rename the new one.
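The copy-then-rename rewrite mentioned above can be sketched like this. The demo runs against a scratch directory rather than a real pool (the file names are made up), so the zfs command appears only as a comment:

```shell
#!/bin/sh
# Rewrite a file in place so its blocks are re-written under the new
# recordsize. On a real dataset you would first run:
#   zfs set recordsize=1M poolname/datasetname
set -e
dir=$(mktemp -d)                 # stand-in for the dataset mountpoint
printf 'movie bytes' > "$dir/video.mkv"
cp -a "$dir/video.mkv" "$dir/video.mkv.tmp"   # copy allocates fresh blocks
mv "$dir/video.mkv.tmp" "$dir/video.mkv"      # rename back over the original
cat "$dir/video.mkv"             # contents unchanged; on-disk layout is new
rm -rf "$dir"
```

The rename is atomic, so readers never see a half-written file, but note the file briefly exists twice, so you need enough free space for the largest file being rewritten.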
See this spreadsheet. With 6 disks in RAIDz1 and the default 128K record size (16 sectors on the chart) you're losing 20% to parity. With 1M record size (256 sectors on the chart) you're losing only 17% to parity. 3% for free!
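The 20% vs. 17% figures can be reproduced with back-of-the-envelope math. This is an approximation of the spreadsheet's logic, assuming roughly one parity sector per stripe of (disks - 1) data sectors and allocations padded up to a multiple of parity + 1 = 2 sectors:

```shell
#!/bin/sh
# Approximate percent of a RAIDz1 allocation lost to parity + padding.
overhead() {
  data=$1
  disks=$2
  parity=$(( (data + disks - 2) / (disks - 1) ))   # ceil(data/(disks-1))
  total=$(( data + parity ))
  total=$(( (total + 1) / 2 * 2 ))                 # pad to multiple of 2
  echo $(( ((total - data) * 100 + total / 2) / total ))   # rounded percent
}
overhead 16 6    # 128K record = 16 sectors on the chart, 6 disks -> 20 (%)
overhead 256 6   # 1M record = 256 sectors on the chart, 6 disks  -> 17 (%)
```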
https://www.reddit.com/r/zfs/comments/9pawl7/zfs_space_efficiency_and_performance_comparison/
https://www.reddit.com/r/zfs/comments/b931o0/zfs_recordsize_faq/
--I use an old quad-core i3 laptop with a 2-port eSATA Expresscard to connect the 4-bay Probox. Can connect it with a USB3 Expresscard as well, but I don't trust that configuration. I was also able to connect it to an older motherboard that had SATA port expansion with an internal-to-external SATA cable.
3FT eSATA to SATA male to male M/M Shielded Extender Extension HDD Cable 6Gbps
--If I need quicker scrub times, I can take the drives and put them in a 5-bay Sans Digital HDDRACK5 with a PC power supply, and hook them up to one of my SAS cards in the tower server I had built from Fry's a few years ago. It's LSI2008 with the cables routed externally.
Cable: External Mini SAS 26pin (SFF-8088) Male to 4x 7Pin Sata Cable
Cards: SAS9200-8E 8PORT Ext 6GB Sata+sas Pcie 2.0
Fan card: Titan Adjustable Dual Fan PCI Slot VGA Cooler (TTC-SC07TZ)
--Sorry for the late reply, BTW - haven't checked the forum for a few days.
Thanks for the reply. Something like this?
https://www.amazon.com/3WARE-Cable-Multi-lane-Internal-SFF-8087/dp/B000FBYS2U
That cable will connect to the one port on the Dell H200 PERC, and will connect to the SAS back panel using those 4 cables?
I'd really recommend these two books for high-level administration of ZFS:
https://www.amazon.com/dp/1642350001/
https://www.amazon.com/dp/164235001X/
And the other one I linked has one chapter that gets into the low-level workings of ZFS:
https://www.amazon.com/dp/0321968972/
Thanks so much for all this!
I had found the memory and controller card below in the interim.
https://www.amazon.com/Tech-PC3-12800-PowerEdge-A3721494-Snpp9rn2c/dp/B01C7YS08U
https://www.amazon.com/LSI-Logic-9207-8i-Controller-LSI00301/dp/B0085FT2JC
I think these will work. What do you think?
On this build I probably won't try to get a slog for the zil but in the future I may if we test and can hook these up to our vm hosts. Do you have any recommendations for that? I know NFS does sync writes so I think I'll need a slog if I do that.
Something like this maybe:
Norco DS-12D External 2U 12 Bay Hot-Swap SAS/SATA Rackmount JBOD Enclosure https://www.amazon.com/dp/B004IXYCOA/ref=cm_sw_r_cp_apa_i_XH5NDb68EB26F
https://www.amazon.com/gp/product/154462204X/ref=oh_aui_detailpage_o01_s00?ie=UTF8&psc=1
If you're interested in some info on ZFS on Linux - it's not a huge book, but it's very technical in parts.
That's why I bought a "new" older motherboard that uses DDR3.
My SuperMicro X8DTE-F-O is a MF Beast with 192GB of RAM.
Each 16GB stick of RAM is $40
Is it one of the MicroServers with the slimline laptop DVD drive, or the full-size 5.25" one?
If it's the latter - and you've got an appropriate HBA installed - you can get up to eight 2.5" 7mm SSDs in its place: https://www.amazon.com/dp/B00TL4US8K
If not, you can swap the DVD drive at least for one extra, using those adapters people used to use for two drives in laptops: https://www.amazon.com/dp/B01MRI8YFN
I've used PCI-e to SATA cards before too, although I guess you could argue these controllers are sort of HBAs. https://www.amazon.com/gp/product/B005B0A6ZS