• 1 Post
  • 32 Comments
Joined 6 days ago
Cake day: January 6th, 2026

  • USB enclosures tend to be less reliable than direct SATA in general, but beyond that I think it’s just FUD. Software RAID isn’t particularly worse over an enclosure than running the same enclosure without any RAID.

    The main argument against it is, I believe, mechanical: more moving parts means things might, well, move, unseating cables and leading to janky connections and possibly resulting failures.

    > You will kill your USB controller, and/or the IO boards in the enclosures

    wat.jpeg

    Source: 10+ years of ZFS and mdadm RAID on USB-SATA adapters of varying dodginess in harsh environments. Errors do happen (99% of the time it’s a jiggly cable, buggy firmware/drivers, or an ordinary drive failure), but nothing close to what you describe.

    Your hardware is not going to become damaged from doing software RAID over USB.

    That aside, the whole project of buying new 4TB HDDs for a laptop today seems misguided. I know times are tight, but JFC, why not get either SSDs or bigger drives instead, or at the very least a proper enclosure?
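
    For what it’s worth, telling those error classes apart is a few commands; a sketch, where the device name is an assumption and some bridges need the -d sat hint:

    ```
    # USB link resets or disconnects point at a flaky cable or adapter
    dmesg | grep -iE 'usb.*(reset|disconnect)'

    # Query the drive itself through the adapter (needs smartmontools)
    smartctl -a -d sat /dev/sdb

    # And let the RAID layer report what it sees
    zpool status -v        # for ZFS
    cat /proc/mdstat       # for mdadm
    ```

    If the dmesg log is full of resets but SMART looks clean, suspect the cable or bridge before the drive.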


  • If you consider ZFS and don’t mind having the machine offline for a day or two, you could fill it up with real data (backups!) or a bunch of representative fake data and run some tests/benchmarks before you fully commit. A lot depends on how the data is structured and what you’re running on it; it may well run fine.
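
    A possible dry run, assuming ZFS and fio are installed; the pool name and device paths are placeholders for the new (empty!) drives:

    ```
    # Build a throwaway mirror pool on the candidate drives
    zpool create -f scratch mirror /dev/sdb /dev/sdc

    # Copy in real backups or representative fake data, then benchmark
    fio --name=randrw --directory=/scratch --rw=randrw --bs=16k \
        --size=8G --runtime=120 --time_based --group_reporting

    # Tear it down before the real deployment
    zpool destroy scratch
    ```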


  • The OP is about hosting a forwarding or recursive DNS server for lookups, not authoritative DNS hosting (which would be at least one more separate server).

    I count two servers (one clusterable for HA). How is that a lot for a small LAN?

    More would also be normal for serving one domain both internally and publicly. Each of these can be a separate server:

    • Internal authoritative for the internal domain
    • Internal resolvers for internal machines
    • Internal source-of-truth for serving your zone publicly (may or may not be an actual DNS server)
    • Public-facing authoritative for your zone serving the above
    • Secondary for the above
    • Recursing resolver of external domains for internal use

    Some people then add another forwarding resolver like dnsmasq on each server.
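
    In practice the internal authoritative and internal recursing roles often collapse into one unbound instance; a minimal sketch, where the zone name and addresses are made up:

    ```
    # /etc/unbound/unbound.conf -- internal resolver for the LAN
    server:
        interface: 0.0.0.0
        access-control: 10.0.0.0/8 allow

        # Answer the internal zone locally, recurse for everything else
        local-zone: "home.example." static
        local-data: "nas.home.example. IN A 10.0.0.10"
    ```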


  • > It seems the DHCP is handing out the fire wall’s ip for DNS server, 100.100.100.1 is that the expected behavior since DNSmasq should be forwarding to TDNS 100.100.100.333. Why not just hand out the TDNS address?

    You could, and that should work, but then it isn’t called forwarding anymore. It forwards because that’s what you configured it to do. Both approaches are valid.

    > I have an opnsense firewall with DNSmasq performing DHCP and DNS forwarding to the Technitium server
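
    The two setups differ by a couple of dnsmasq lines; a sketch using the addresses from the thread as written:

    ```
    # dnsmasq as forwarder: clients ask the firewall, which relays to Technitium
    no-resolv
    server=100.100.100.333

    # Alternative: skip the middle-man and hand clients Technitium directly
    #dhcp-option=option:dns-server,100.100.100.333
    ```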


  • On 1: Auto-seeding ISOs over BitTorrent is pretty easy, helps strengthen and decentralize community distribution, and ensures you already have the latest stable release locally when you need it.

    While a bit more resource-intensive (several hundred GB), running a full distribution package mirror is very nice if you can justify it. No more waiting for registry syncs and package downloads on installs and upgrades. See apt-mirror if you are curious.

    Otherwise, apt-cacher-ng will at least get you a seamless shared package cache on the local network. Not as resilient, but still very helpful in outage scenarios if you have more than one machine on the same distribution. Set one to auto-upgrade with unattended-upgrades and the packages should already be cached for the rest, too.
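
    Pointing clients at the cache is one file; a sketch assuming the cache host is reachable as apt-cache.lan on apt-cacher-ng’s default port:

    ```
    # /etc/apt/apt.conf.d/02proxy  (hostname is an assumption)
    Acquire::http::Proxy "http://apt-cache.lan:3142";
    ```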



  • > I am currently trying to transition from docker-compose to podman-compose before trying out podman quadlets eventually.

    Just FYI, and not related to your problem: you can run docker-compose against the podman engine. You don’t need the docker engine installed for this. If podman-compose is set up properly, this is what it does for you anyway; if not, it falls back to an incomplete Python reimplementation. Might as well cut out the middle-man.

    systemctl --user enable --now podman.socket
    DOCKER_HOST=unix://${XDG_RUNTIME_DIR}/podman/podman.sock docker-compose up

  • kumi@feddit.online to Selfhosted@lemmy.world · Podman Linkding Issues · edited, 3 days ago

    I think Mora is on the ball, but we’d need their questions answered to know.

    One possibility is that you have SELinux enabled; check with sudo getenforce. The podman manpage explains a bit about labels and shared mounts. Read up on the :z and :Z volume options and see if appending either to the volumes in your compose file unlocks it.

    If running rootless, your host user obviously also needs to be able to access the path.
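
    A sketch of what that looks like in a compose file; the host path is made up, and note that :Z relabels for exclusive use by one container while :z allows sharing between containers:

    ```
    services:
      linkding:
        image: sissbruecker/linkding:latest
        volumes:
          # Append :Z so podman relabels the host dir for this container
          - ./linkding-data:/etc/linkding/data:Z
    ```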