Hi all, I’m playing around with LVM to expand data storage, and I’m looking at what would be required to transfer those drives to another device. All the steps I can find require exporting the volume group and then importing it on the other device. But what happens if your boot drive were to fail and you needed to move the drives without being able to export the volume group? Can you just do an import on a new device, or are there other steps required?
Secondly, is there a benefit to creating an LVM volume with a btrfs filesystem vs just letting btrfs handle it?
LVM itself does not provide redundancy; that’s RAID. LVM is often used on top of a RAID device. If your boot drive fails, LVM itself won’t save you; RAID (software RAID 1 is really common for a boot drive) can.
LVM can be used to seamlessly move data between physical volumes. You can add a new PV to the VG and move extents from the old PV to the new one. I’ve used it to live-migrate to a larger drive that way. Once the physical extents have been moved to the new PV, you can remove the old PV from the VG and then remove the old disk.
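A minimal sketch of that migration, assuming the VG is called `vg_data`, the old disk is `/dev/sdb`, and the new one is `/dev/sdc` (all names here are placeholders for your own setup):

```shell
# Initialize the new disk as an LVM physical volume
pvcreate /dev/sdc

# Add the new PV to the existing volume group
vgextend vg_data /dev/sdc

# Move all allocated extents off the old PV
# (pvmove can run while the LVs stay mounted)
pvmove /dev/sdb

# Remove the now-empty old PV from the VG, then wipe its LVM label
vgreduce vg_data /dev/sdb
pvremove /dev/sdb
```

After that the old disk can be pulled from the machine.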
LVM itself does not provide redundancy; that’s RAID.
I think this is potentially a bit confusing.
LVM does provide RAID functionality and can be used to set up and manage redundant volumes.
See `--type` and `--mirror` under `man 8 lvcreate`.

Correct, however basically no one uses that. The MD (RAID) devices are much more common for that, including under boot drives.
See comparison on ServerFault.
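For reference, a mirrored LV under LVM’s own RAID support looks roughly like this (the VG name `vg0`, LV name, and size are placeholders):

```shell
# Create a RAID1 LV with one mirror (two copies of the data total)
lvcreate --type raid1 -m 1 -L 20G -n lv_mirror vg0

# Watch the mirror sync progress and see which PVs hold each leg
lvs -a -o name,sync_percent,devices vg0
```

As noted above, though, mdadm is the more common way to get redundancy, especially for boot drives.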
Secondly, is there a benefit to creating an LVM volume with a btrfs filesystem vs just letting btrfs handle it?
Like, btrfs on top of LVM versus plain btrfs? Well, the former gives you access to LVM features. If you want to use lvmcache or something, you’d want it on LVM.
That’s kind of what I figured. My biggest concern at this point is how difficult it is to move the LVM volume to a different device. It seems pretty straightforward if you have a working setup, but my research turns up silence on what to do if your device (server, etc.) dies and you need to move those volumes to another one. I’m finding guides on either recovering from corruption or lost metadata, or transferring from one working device to another, but nothing about importing a fully functional LVM setup to a new device if it hasn’t been exported.
I’ll be able to do that later today, so I guess I’ll see what happens if I do. Better to try it out now when it isn’t critical.
If you’re talking about just moving the physical volumes (as in the actual hard drives) as-is to another computer, they’re automatically scanned and ready to go in the majority of modern distributions. No need to export/import anything. This obviously assumes your boot drive isn’t part of the volume group and you have healthy drives in hand. You can test this with any live distribution: just boot from USB into a new operating system and verify your physical volumes/volume groups from there.
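On the new machine (or from that live USB), the standard scan and activation commands are enough to check everything came over; `vg_data` and the LV name below are placeholders:

```shell
# Scan all block devices for LVM physical volumes and volume groups
pvscan
vgscan

# Activate the volume group so its LVs appear under /dev/mapper
vgchange -ay vg_data

# List the logical volumes, then mount one to verify the data
lvs vg_data
mount /dev/vg_data/lv_home /mnt
```

If `pvscan` sees all the PVs and `vgchange -ay` succeeds, no export/import step was needed.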
If you want to move the volume group to a new set of disks, the simplest way is to add the new physical drive(s) to the volume group and then remove the old drive(s) from it after the data has been copied. Search for pvmove and vgreduce. This obviously requires a working system; if your data drive has already failed, it’s a whole other circus.
@Cenzorrll For a comparison of LVM with BTRFS, there are several articles available:
https://www.baeldung.com/linux/btrfs-lvm
https://fedoramagazine.org/choose-between-btrfs-and-lvm-ext4/

From personal experience, I have an encrypted software RAID1 with mdadm and BTRFS on top.
It’s not LVM, but it’s in the same direction.
Before implementing this, I made some tests.
Related to encryption: when RAID1 was implemented with BTRFS, the CPU load doubled, because every BTRFS disk gets its own encryption process. With software RAID1, there is only one encryption process.