If you're considering ZFS and don't mind having the machine offline for a day or two, you could fill it up with real data (backups!) or a bunch of representative fake data and run some tests/benchmarks before you fully commit. A lot depends on how the data is structured and what you're running on it; it's quite possible it will run fine.
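A rough sketch of what such a dry run could look like, assuming OpenZFS on Linux; the device names, pool layout, and fio parameters are all placeholders:

```
# Build a throwaway pool on the target disks (example: a two-disk mirror)
zpool create -f testpool mirror /dev/sdb /dev/sdc

# Fill it with real backups or representative fake data, then benchmark the
# access pattern you actually care about, e.g. random 4k reads with fio
fio --name=randread --directory=/testpool --rw=randread --bs=4k \
    --size=10G --numjobs=4 --time_based --runtime=120 --group_reporting

# Tear it down again before deciding on the final layout
zpool destroy testpool
```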
kumi@feddit.online to Selfhosted@lemmy.world • DNS kicking my ass (Technitium and opnsense)
1 · 3 hours ago
The OP is about hosting forwarding or recursive DNS for lookups, not authoritative DNS hosting (which would be yet another separate server, at least).
I count two servers (one clusterable for HA). How is that a lot for a small LAN?
More would also be normal for serving one domain internally and publicly. Each of these can be separate:
- Internal authoritative for internal domain
- Internal resolvers for internal machines
- Internal source-of-truth for serving your zone publicly (may or may not be an actual DNS server)
- Public-facing authoritative for your zone serving the above
- Secondary for the above
- Recursing resolver of external domains for internal use
Some people then add another forwarding resolver like dnsmasq on each server.
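To illustrate that last role, a minimal dnsmasq forwarder could look roughly like this (the zone name and addresses are made-up examples):

```
# /etc/dnsmasq.conf (sketch)
# Ignore upstreams from /etc/resolv.conf
no-resolv
# Forward the internal zone to the internal authoritative server
server=/home.example/10.0.0.2
# Forward everything else to the recursing resolver
server=10.0.0.3
```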
kumi@feddit.online to Selfhosted@lemmy.world • DNS kicking my ass (Technitium and opnsense)
3 · 3 hours ago
> It seems the DHCP is handing out the firewall's IP for DNS server, 100.100.100.1. Is that the expected behavior, since DNSmasq should be forwarding to TDNS 100.100.100.333? Why not just hand out the TDNS address?
You could, and that should work, but then it's not forwarding anymore. It does forwarding because that's what you configured. Both approaches are valid.
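For illustration, the two approaches in dnsmasq terms (the Technitium address here is a placeholder):

```
# Approach 1: clients get dnsmasq as their DNS server; dnsmasq forwards to Technitium
server=100.100.100.3

# Approach 2: skip the hop and have DHCP hand clients the Technitium address directly
dhcp-option=option:dns-server,100.100.100.3
```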
> I have an opnsense firewall with DNSmasq performing DHCP and DNS forwarding to the Technitium server
I suspect this machine might be memory-constrained, and if so, ZFS might push it to its limits if it's already close.
If it has <8 GB and doesn't already have decent headroom, I'd think twice about ZFS, depending on how it's going to be used.
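If memory does turn out to be tight, one knob worth knowing about (assuming OpenZFS on Linux; the 2 GiB value is just an example) is capping the ARC:

```
# Cap the ZFS ARC at 2 GiB, persistently via module options
echo "options zfs zfs_arc_max=2147483648" | sudo tee /etc/modprobe.d/zfs-arc.conf

# Or apply it immediately at runtime
echo 2147483648 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
```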
> I worry I could be risking data corruption or something swapping to this setup
I really hope that's just a turn of phrase and you're not actually planning to put swap on those HDDs.
kumi@feddit.online to Selfhosted@lemmy.world • nvidia devices not appearing on boot (proxmox)
5 · 19 hours ago
Nope, but I guess a workaround would be to make a oneshot `workaround-nvidia-gpu.service` systemd unit file that runs the command and have the LXC autostart depend on it?
Might be something about PCI resets that running the command triggers 🤷♀️
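A sketch of what such a unit could look like; the ExecStart line is a placeholder for whatever command recreates the devices, and pve-guests.service is assumed to be the Proxmox unit that autostarts guests:

```
# /etc/systemd/system/workaround-nvidia-gpu.service (sketch)
[Unit]
Description=Recreate NVIDIA device nodes before guest autostart (workaround)
# Assumption: pve-guests.service is what autostarts the containers
Before=pve-guests.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Placeholder: replace with the command that actually brings the devices back
ExecStart=/usr/bin/nvidia-smi

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable workaround-nvidia-gpu.service`; since both units hang off multi-user.target, the Before= ordering should take effect at boot.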
kumi@feddit.online to Selfhosted@lemmy.world • Stop using MySQL in 2026, it is not true open source
61 · 20 hours ago
Operating and securing Postgres is a steeper learning curve. MariaDB is more forgiving for best-effort shoestring setups without trading away scalability for it.
As a dev I'm agnostic; as an owner and computer scientist I prefer Postgres; as a sysadmin or *Ops I will put my hand up for MariaDB any day if I'll be the one on call or maintaining the deployments.
kumi@feddit.online to Selfhosted@lemmy.world • Cheapest way to back up a *lot* of data?
4 · 23 hours ago
You can replicate across more than one provider and do automated, regular monitoring that the backups are still accessible.
If one goes down you hopefully have time to figure out a replacement before the other(s) do.
Probably not worth it for a bunch of XviD DVD rips or historical archives of full system-level backups, but for critical data it's sensible.
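A hedged sketch of what that can look like with rclone (the remote names and bucket paths are made up; any two independent providers work):

```
# Replicate the same backup set to two independent providers
rclone sync /srv/backups remote-a:backups
rclone sync /srv/backups remote-b:backups

# Periodically (e.g. from cron) verify both copies are still readable
rclone check /srv/backups remote-a:backups --one-way
rclone check /srv/backups remote-b:backups --one-way
```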
kumi@feddit.online to Selfhosted@lemmy.world • How are people discovering random subdomains on my server?
4 · 2 days ago
What you can do is segregate networks.
If the browser runs in, say, a VM with only access to the intranet and no internet access at all, this risk is greatly reduced.
> LVM itself does not provide redundancy, that's RAID.
I think this is potentially a bit confusing.
LVM does provide RAID functionality and can be used to set up and manage redundant volumes.
See `--type` and `--mirrors` under `man 8 lvcreate`.
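For example, a mirrored (RAID1) logical volume managed entirely by LVM, as a sketch (volume group, size, and name are placeholders):

```
# Create a 100 GiB RAID1 logical volume in volume group vg0
lvcreate --type raid1 --mirrors 1 --size 100G --name data vg0

# Watch the mirror sync and see which devices back each leg
lvs -a -o name,copy_percent,devices vg0
```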
kumi@feddit.online to Selfhosted@lemmy.world • How are people discovering random subdomains on my server?
7 · 2 days ago
My next suspicion from what you've shared so far, apart from what others suggested, would be something outside of the HTTP server loop.
Have you used some free public DNS server and inadvertently queried it with the name from a container or something? Developer tooling building some app with analytics not disabled? Any locally connected AI agents having access to it?
kumi@feddit.online to Selfhosted@lemmy.world • How are people discovering random subdomains on my server?
131 · 2 days ago
You say you have a wildcard cert, but just to make sure: I don't suppose you've used ACME with Let's Encrypt or some other publicly trusted CA to issue a cert that includes the affected name? If so, it will be public in the Certificate Transparency logs.
If not, I'd do it again and closely log and monitor every packet leaving the box.
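Checking the CT logs only takes a minute; one way, as a sketch using the crt.sh search frontend (example.com is a placeholder):

```
# List all names in certificates logged for the domain (Certificate Transparency)
curl -s 'https://crt.sh/?q=%25.example.com&output=json' | jq -r '.[].name_value' | sort -u
```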
kumi@feddit.online to Selfhosted@lemmy.world • Self-hosting in 2026 isn't about privacy anymore - it's about building resistance infrastructure
2 · 3 days ago
If anyone else is seeing high resource use from seeding: there's quite a lot of spam and griefing happening on at least the Debian and Arch trackers and DHT.
Blocking malicious peers can cut that down a lot. PeerBanHelper is like a spam filter for torrent clients.
https://github.com/PBH-BTN/PeerBanHelper/blob/dev/README.EN.md
kumi@feddit.online to Selfhosted@lemmy.world • Self-hosting in 2026 isn't about privacy anymore - it's about building resistance infrastructure
3 · 3 days ago
On 1: Autoseeding ISOs over BitTorrent is pretty easy, helps strengthen and decentralize community distribution, and makes sure you already have the latest stable release locally when you need it.
While a bit more resource-intensive (several hundred GB), running a full distribution package mirror is very nice if you can justify it. No more waiting for repository sync and package downloads on installs and upgrades.
See `apt-mirror` if you are curious. Otherwise, `apt-cacher-ng` will at least get you a seamless shared package cache on the local network. Not as resilient, but still very helpful in outage scenarios if you have more than one machine on the same dist. Set one to auto-upgrade with `unattended-upgrades` and the packages should already be cached for the rest, too.
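The client side of apt-cacher-ng is a one-liner; a minimal sketch, assuming the cache runs on a host named `cacher` on the default port 3142:

```
// /etc/apt/apt.conf.d/00aptproxy on each client: route apt through the shared cache
Acquire::http::Proxy "http://cacher:3142";
```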
kumi@feddit.online to Selfhosted@lemmy.world • Self-hosting in 2026 isn't about privacy anymore - it's about building resistance infrastructure
32 · 3 days ago
Yes, Home Assistant has this.
https://rhasspy.readthedocs.io/en/latest/
Works great. My biggest challenge was finding a decent microphone setup; like many, I ended up with old PlayStation 3 webcams. That was a while back, and I would guess it's easier to find something more appropriate today.
> I am currently trying to transition from docker-compose to podman-compose before trying out podman quadlets eventually.
Just FYI, and not related to your problem: you can run docker-compose against the Podman engine. You don't need the Docker engine installed for this. If the `podman compose` wrapper is set up properly, this is what it does for you anyway; if not, it falls back to `podman-compose`, an incomplete Python reimplementation. Might as well cut out the middle-man.
systemctl --user enable --now podman.socket
DOCKER_HOST=unix://${XDG_RUNTIME_DIR}/podman/podman.sock docker-compose up
I think Mora is on the ball but we’d need their questions answered to know.
One possibility is that you have SELinux enabled. Check with `sudo getenforce`. The podman manpage explains a bit about labels and shares for mounts. Read up on `:z` and `:Z` and see if appending either to the volumes in your compose file unlocks it.
If running rootless, your host user also obviously needs to be able to access it.
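For illustration, a hedged compose fragment (image, paths, and service name are made up) showing where the suffix goes:

```
# ":Z" asks Podman to relabel the bind mount so SELinux allows this container to use it
services:
  app:
    image: docker.io/library/nginx:latest
    volumes:
      - ./data:/usr/share/nginx/html:Z
```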
USB enclosures do tend to be less reliable than SATA in general, but I think that claim is just FUD. It's not like software RAID is particularly worse over such an enclosure than running the enclosure without any RAID.
The main argument against it is, I believe, mechanical: having more moving parts means things might, well, move, unseating cables and leading to janky connections and possibly resulting failures.
wat.jpeg
Source: 10+ years of ZFS and mdadm RAID on USB-SATA adapters of varying dodginess in harsh environments. Of course errors happen (99% of the time it's either a jiggly cable, buggy firmware/drivers, or a normal drive failure), but nothing close to what you speak of.
Your hardware is not going to become damaged from doing software RAID over USB.
That aside, the whole project of buying new 4TB HDDs for a laptop today just seems misguided. I know times are tight but JFC why not get either SSDs or bigger drives instead, or if nothing else at least a proper enclosure.