It’s not exactly difficult to figure out how much space you’ve got left when you’re using OpenZFS, but it is different from doing so on traditional filesystems, because OpenZFS brings considerably more complexity to the table: space accounting has to take factors like snapshots, compression, and deduplication into account.
By the time we’re done today, we’ll understand how to use both:
- filesystem-agnostic tools like du and df
- OpenZFS-native tools like zfs list and zpool list.
OpenZFS brings new concepts to filesystem management that muddy this simple picture a bit: snapshots, inline compression, and block-level deduplication. To effectively manage our OpenZFS filesystem, we’ll need to begin by understanding three properties: USED, REFER, and AVAIL.
All three properties revolve around the status of logical sectors, not physical sectors.
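To make those properties concrete, here is a minimal sketch (the pool name tank is a placeholder, and the comments summarize the usual meaning of each column):

# Show the three key space properties for every dataset in the pool
zfs list -o name,used,avail,refer -r tank

# USED  - space consumed by the dataset plus everything descended from it
# AVAIL - space still available to the dataset
# REFER - space reachable from the live dataset itself, ignoring snapshots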
Replication is an OpenZFS feature that really ups the data management game, providing a mechanism for handling a hardware failure with minimal data loss and downtime. Fortunately, replication itself is easy to configure and understand. In this article we’ll keep things simple, and practice replicating small amounts of data to a virtual machine.
zrep simplifies ZFS replication for the use-case we’re demonstrating. However, we’ll also discuss many concepts and aspects of ZFS that are not specific to zrep. zrep itself is not available from ports, but it only consists of a single shell script. The stable version of zrep needs ksh as its operating shell, though the newer version from GitHub can also use bash. The script needs to be installed in a directory that is included in $PATH on both systems. On FreeBSD, you may want to change the first line to point to the correct location of ksh or bash in /usr/local/bin.
Initially, it may also be useful to set up keys to allow password-less ssh logins for root between the two systems. This was covered in the article Introduction to ZFS Replication. You may later prefer an alternative to ssh, and perhaps use zfs allow to avoid using root, but a familiar tool like ssh is convenient.
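As a rough sketch of the zrep workflow (the pool, dataset, and host names below are placeholders, not from the article), initial setup and an ongoing sync look roughly like this:

# One-time setup: snapshots the source dataset, sends it to the destination,
# and records zrep properties on both sides
zrep init tank/data backuphost tank/data

# Later runs send only an incremental snapshot to the destination
zrep sync tank/data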
In ZFS, storage pools, or zpools, are the fundamental units of storage management. A zpool aggregates the capacity of physical devices into a single, logical storage space. All data, including datasets, snapshots, and volumes, is stored within zpools. The management of these storage pools is essential for ensuring that ZFS operates efficiently and reliably. The following sections describe how to create and destroy zpools, manage devices within a pool, and monitor the health of a zpool.
Creating and Destroying zpools
Creating a ZFS Storage Pool
To create a zpool, use the zpool create command. The syntax for creating a pool requires specifying a pool name and the devices that will be part of the pool. Below are several examples demonstrating different configurations.
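As a hedged sketch of what such configurations can look like (the pool name tank and the da0..da3 device names are placeholders):

# Simple striped pool across two disks (no redundancy)
zpool create tank da0 da1

# Two-way mirror
zpool create tank mirror da0 da1

# RAID-Z1 across four disks
zpool create tank raidz da0 da1 da2 da3

# Destroy a pool and release its devices
zpool destroy tank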
The following sections describe how to identify and resolve problems with your ZFS file systems or storage pools:
- Determining If Problems Exist in a ZFS Storage Pool
- Reviewing zpool status Output
- System Reporting of ZFS Error Messages
ZFS automatically logs successful zfs and zpool commands that modify pool state information. This information can be displayed by using the zpool history command.
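As a quick illustration (the pool name tank is a placeholder), the basic health check and the command log look like this:

# Report only pools with problems; prints "all pools are healthy" otherwise
zpool status -x

# Detailed status for one pool, including per-device error counters
zpool status -v tank

# Show the log of zfs/zpool commands that modified the pool
zpool history tank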
How about using FreeBSD as an Enterprise Storage solution on real hardware? This is where FreeBSD shines, with all its storage features, ZFS included.
Today I will show you how I have built a so-called Enterprise Storage system based on FreeBSD, with more than 1 PB (petabyte) of raw capacity.
There are 4U servers with 90-100 3.5″ drive slots which will allow you to pack 1260-1400 Terabytes of data (with 14 TB drives). Examples of such systems are:
I would use the first one, the TYAN FA100 for short.
The build has the following specifications:
2 x 10-Core Intel Xeon Silver 4114 CPU @ 2.20GHz
4 x 32 GB RAM DDR4 (128 GB Total)
2 x Intel SSD DC S3500 240 GB (System)
90 x Toshiba HDD MN07ACA12TE 12 TB (Data)
2 x Broadcom SAS3008 Controller
2 x Intel X710 DA-2 10GE Card
2 x Power Supply
The price of the whole system is about $65,000, drives included.
The Road to RAID-Z Expansion
Expanding storage capacity has long been a challenge for RAID-Z users. Traditionally, increasing the size of a RAID-Z pool required adding an entirely new RAID-Z vdev, often doubling the number of disks—an impractical solution for smaller storage pools with limited expansion options.
To address this, the FreeBSD Foundation funded the development of RAID-Z expansion, making it both practical and easy to implement. Led by Matt Ahrens, a ZFS co-creator, the feature underwent years of rigorous testing and refinement. Although the pandemic caused some delays, the project was feature-complete in 2022. Additional integration steps followed, and the feature is now generally available in OpenZFS.
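For context, in OpenZFS 2.3 an existing RAID-Z vdev is expanded by attaching a single new disk to it. A rough sketch (the pool, vdev, and device names are placeholders):

# Attach one new disk to an existing raidz1 vdev; the pool stays online
# while existing data is reflowed across the added disk
zpool attach tank raidz1-0 da4

# Watch the expansion progress
zpool status tank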
Thank You for Your Support
After years of development, industry collaboration, infrastructure testing, and nearly $100,000 investment, we are so excited to see RAID-Z expansion in the recent release of OpenZFS 2.3. We’re also grateful to iXsystems for their efforts in finalizing and integrating this feature into OpenZFS.
This marks a significant milestone in the evolution of the ZFS filesystem and reinforces its position as a cutting-edge open source filesystem for modern storage use cases.
This development work happened because of your donations to the FreeBSD Foundation. We couldn’t have made this financial commitment without your help. Thank you to all our supporters, large and small.
ZFS is the last word in filesystems. Period. Many administrators are confused about using it, because ZFS is more than a filesystem; it introduces many new concepts. This blog’s mission is to bring ZFS to more homes and companies and show that we don’t need any other filesystem.
ZedFS gathers information and tutorials about ZFS. If you are curious about ZFS, this website will become your home.
Ok, but why ZedFS?
There is an endless discussion about whether we should pronounce ZFS as ZeeFS or ZedFS. The debate is so hot that even Michael W Lucas and Allan Jude (the authors of FreeBSD Mastery: Advanced ZFS) disagreed on how we should pronounce it. Because of that, there is a Canadian version of the “ZedFS FreeBSD Mastery.” (The story of the book). If you are from the ZeeFS camp, then the ‘ed’ in the blog’s name is from EDucation.
In traditional file systems we use df(1) to determine free space on partitions, and du(1) to count the size of the files in a directory. But it’s different on ZFS, and this is the most confusing thing EVER. I always forget which tool reports what disk space usage! Every time somebody asks me, I need to google it. For this reason I decided to document it here, for myself: even if I can’t remember it, at least I won’t need to google it, because it will be on my blog. Maybe you will also benefit from this post if you have the same problem or are starting your journey with ZFS.
The zfs list command provides an extensible mechanism for viewing and querying dataset information. Both basic and complex queries are explained in this section.
For one of my datasets, zfs list and df show significantly different used numbers. Okay, zfs list -o space reveals that it’s (a) snapshot(s):
zfs list -t snapshot myPool/myDataset
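As a sketch of that investigation (myPool/myDataset is the dataset name from the excerpt; the column selection is just one reasonable choice):

# Break USED down into snapshot, dataset, child, and refreservation space
zfs list -o space myPool/myDataset

# List the snapshots holding that space, with their individual usage
zfs list -t snapshot -o name,used,refer myPool/myDataset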
zrepl is a one-stop, integrated solution for ZFS replication.
Today we have a quick guide on how to automate Proxmox VE ZFS offsite backups to Rsync.net. The folks at rsync.net set us up with one of their ZFS-enabled trial accounts. As the name might imply, Rsync.net started as a backup service for those using rsync. Since then they have expanded to allowing ZFS access for ZFS send/receive, which makes ZFS backups very easy. In our previous article we showed server-to-server backup using pve-zsync to automate the process.
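At its core, that kind of offsite backup is a zfs send piped into zfs receive over ssh. A hedged sketch (the user, host, and dataset names are placeholders, not rsync.net specifics):

# Initial full send of a snapshot to the remote pool
zfs send rpool/data@base | ssh user@backup.example.com zfs receive data1/pve-backup

# Later runs send only the changes between two snapshots
zfs send -i rpool/data@base rpool/data@daily1 | ssh user@backup.example.com zfs receive data1/pve-backup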
With the introduction of SAS 12Gbps, it seems like "it's time" to do a braindump on SAS.
Work in progress, as usual.
History
By the late '90s, SCSI and PATA were the dominant technologies for attaching disks. Both were parallel-bus, multiple-drop topologies, and this kind of sucked. SATA and Serial Attached SCSI (SAS) evolved from those, using a serial bus and hub-and-spoke design.
Early SATA/150 and SATA/300 were a bit rough and had some issues, as did SAS 3Gbps. You probably want to avoid older controllers, cabling, expanders, etc. that don't support 6Gbps, because some of that gear has "gotchas" in it. In particular a lot of it has 2TB size limitations. Most 3Gbps hard drives are fine though.
Similarities, Differences, Interoperability
SAS and SATA operate at the same link speeds and use similar cabling. SAS normally operates at a higher voltage than SATA and can run over longer cabling.
SAS and SATA use different connectors on the drive. The SATA drive connector has a gap between the signal and power sections, which allows separate power and data cables to be easily connected. The SAS drive connector does not have a gap, and instead has a second set of pins on top. This second set of pins is the second (redundant) SAS port.
SATA drives can be attached to a SAS port. Electrically, the SAS port is designed to allow attachment of a SATA drive, and will automatically run at SATA-appropriate voltages. Physically, the SAS backplane connector has an area that will allow either the gapless SAS or the gapped SATA connector to fit.
SAS drives are incompatible with SATA ports, however, and a SATA connector will not attach to a SAS drive. Don't try. The gap is there to block a SAS drive from being connected to typical SATA cabling, or to a SATA backplane socket.
When a SATA drive is attached to a SAS port, it is operated in a special mode using the Serial ATA Tunneling Protocol (STP).
ZFS posts
1) An HBA is a Host Bus Adapter.
This is a controller that allows SAS and SATA devices to be attached to, and communicate directly with, a server. RAID controllers typically aggregate several disks into a Virtual Disk abstraction of some sort, and even in "JBOD" or "HBA mode" generally hide the physical device. If you cannot see the type of device (such as "ST6000DX000-1H217Z") in "camcontrol devlist", you DO NOT HAVE A TRUE HBA. If you cannot get the output of "smartctl" for a device, you DO NOT HAVE A TRUE HBA. A true HBA passes communications through itself directly to a drive without further processing. No amount of marketing department wishful thinking can change that technical reality.
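As a quick check (the device node da0 is a placeholder), both of these should work against every disk behind a true HBA on FreeBSD:

# List attached devices; real drive model strings should be visible
camcontrol devlist

# SMART data should be readable directly from the drive
smartctl -a /dev/da0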
2) FreeBSD has incredibly robust support for the LSI HBA's.
FreeBSD's LSI HBA (mps/mpr) drivers are authored by LSI and carefully designed to work with their HBA firmware. The FreeNAS userbase has installed many thousands of these cards which have, in aggregate, BILLIONS of problem-free run-hours. Not only are they known to work very well during normal operations, but they're also known to work correctly during ABNORMAL operations, such as when a disk times out or throws an error. SMART is properly supported. Forum members are incredibly familiar with all the variations on these and can provide useful assistance. Cards such as the LSI 9240-8i, IBM ServeRAID M1015, Dell PERC H200 and H310, and others are readily available on the used market and can be converted to LSI 9211-8i equivalents.
3) You must crossflash to IT/IR firmware
If you don't crossflash, then a lot of the remainder of this ALSO applies to LSI non-IT-20.00.07.00 HBA's!! The IR firmware is also fine but is a few percent slower. It is not clear there is any value to doing this as you would never want to use an IR virtual device with FreeNAS. We used to do this in the old days for boot devices, but with ZFS boot this is probably no longer relevant.
The LSI 9240 (etc) default MFI firmware is apparently being sold on eBay as "IR" by clueless sellers. The MFI firmware is unsuitable for FreeNAS and may cause your pool to get eaten.
The LSI 9211-8i (PCIe 2.0 based on LSI 6Gbps SAS2008) and LSI 9207-8i (PCIe 3.0 based on LSI 6Gbps SAS2308) both require firmware 20.00.07.00.