Curious to hear about the experiences of those who are sticking to bare metal. I'd like to better understand what keeps such admins from migrating to containers (Docker, Podman), virtual machines, etc. What keeps you on bare metal in 2025?

  • atzanteol@sh.itjust.works · 4 months ago

    Containers run on “bare metal” in exactly the same way other processes on your system do. You can even see them in your process list FFS. They’re just running in different cgroups that limit access to resources.

    Yes, I’ll die on this hill.
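
    If you want to see this for yourself, here’s a quick sketch assuming a Docker host and the stock nginx image (the container name is made up):

        # start a throwaway container (name is illustrative)
        docker run -d --name demo-nginx nginx

        # the nginx processes show up in the host's process list like any other
        ps aux | grep [n]ginx

        # and the container's cgroup lives in the host's normal cgroup hierarchy
        cat /proc/$(docker inspect -f '{{.State.Pid}}' demo-nginx)/cgroup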

    • sylver_dragon@lemmy.world · 4 months ago

      But, but, docker, kubernetes, hyper-scale convergence and other buzzwords from the 2010s! These fancy words can’t just mean resource and namespace isolation!

      In all seriousness, the isolation provided by containers is significant enough that administering containers is different from running everything in the same OS. That’s different in a good way, though; I don’t miss the bad old days of everything on a single server in the same space. Anyone else remember the joys of Windows Small Business Server? Let’s run Active Directory, Exchange and MSSQL on the same box. No way that will lead to prob… oh shit, the RAM is on fire.

      • AtariDump@lemmy.world · 4 months ago (edited)

        …oh shit, the RAM is on fire.

        The RAM. The RAM. The 🐏 is on fire. We don’t need no water let the mothefuxker burn.

        Burn mothercucker, burn.

        (Thanks phone for the spelling mistakes that I’m leaving).

  • CompactFlax@discuss.tchncs.de · 8 days ago (edited)

    Pure bare metal is crazy to me. I run Proxmox and mount my storage there, and from there it is shared to the machines that need it. It would be convenient to do a passthrough to TrueNAS for some of the functions it provides, but I don’t trust my skills for that. I’d have kept TrueNAS on bare metal, but I need so little horsepower for my services that it would be a waste. I don’t think the trade-offs of having TrueNAS run my virtualisation environment were really worth it.
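
    As a rough sketch of how that kind of host-to-guest sharing is often done (NFS here; the pool path, subnet, and host IP are placeholders, and this is one common approach rather than a prescription):

        # on the Proxmox host: export the storage over NFS
        apt install nfs-kernel-server
        echo '/tank/media 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
        exportfs -ra

        # in a VM or LXC that needs the storage
        apt install nfs-common
        mkdir -p /mnt/media
        mount -t nfs 192.168.1.10:/tank/media /mnt/media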

    My router is bare metal. It’s much simpler to handle the networking with a single physical device like that. Again, it would be convenient to set up OPNsense in a VM for failover, but it introduces a bunch of complexity I don’t want or really need. The router typically goes down only for maintenance, not because it crashed or something. I don’t have redundant power or ISPs either.

    To me, docker is an abstraction layer I don’t need. VMs are good enough, and proxmox does a good job with LXCs so far.

    Why would I spin up a VM, and a virtual network within that VM, and then a container, when I can just spin up a VM?

    I’ve not spent time learning Docker or k8s; it seems very much a tool designed for a scale that most companies don’t operate at, let alone my home lab.

  • sepi@piefed.social · 4 months ago

    “What is stopping you from” <- this is a loaded question.

    We were hosting stuff long before Docker existed. Docker isn’t necessary. It is helpful sometimes, and even useful in some cases, but it is not a requirement.

    I had no problems with dependencies, config, etc., because I am familiar with just running stuff on servers across multiple OSs. I am used to the workflow. I am also used to Docker and k8s, mind you - I’ve even worked at a company that made k8s controllers + operators, etc. I believe in the right tool for the right job, where “right” varies on a case-by-case basis.
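
    For anyone unfamiliar with that workflow, “just running stuff on servers” often amounts to nothing more than a plain systemd unit; everything below (service name, user, paths) is hypothetical:

        # /etc/systemd/system/myapp.service (hypothetical example)
        [Unit]
        Description=Example self-hosted service
        After=network.target

        [Service]
        User=myapp
        ExecStart=/usr/local/bin/myapp --config /etc/myapp/config.yml
        Restart=on-failure

        [Install]
        WantedBy=multi-user.target

    Then a systemctl daemon-reload followed by systemctl enable --now myapp, and it behaves like any other service on the box.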

    tl;dr Docker is not an absolute necessity, and your phrasing makes it seem like it’s the only way of self-hosting you’re comfy with. People are and have been comfy with a ton of other things for a long time.

    • kiol@lemmy.world (OP) · 4 months ago

      The question is worded that way on purpose, so that you’ll fill in what it means to you. The intention is to get responses from people who are not using containers, that is all. Thank you for responding!

  • Strider@lemmy.world · 4 months ago

    Erm. I’d just say there’s no benefit in adding layers just for the sake of it.

    It’s just different needs. Say I have a machine that I run a dedicated database on; I’d install it directly, just like that, because, as I said, there’s no advantage in making it more complicated.
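
    As a concrete (if trivial) sketch, assuming a Debian-based box and PostgreSQL purely as the example database:

        # install the database straight from the distro repos and start it
        sudo apt install postgresql
        sudo systemctl enable --now postgresql

    One package, one service, nothing else in the way.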

  • nucleative@lemmy.world · 4 months ago

    I’ve been self-hosting since the ’90s. I used to have an NT 3.51 server in my house. I had a dial-in BBS that worked because of an extensive collection of .bat files that would echo AT commands to my COM ports to reset the modems between calls. I remember when we had to compile the Slackware kernel from source to get peripherals to work.

    But in this last year I took the time to seriously learn docker/podman, and now I’m never going back to running stuff directly on the host OS.

    I love it because I can deploy instantly, oftentimes with a single command. Docker Compose allows for quickly nuking and rebuilding, often saving your entire config to one or two files.

    And if you need to slap a Traefik, or a Postgres, or some other service into your group of containers, it can now be done in seconds, completely abstracted from any local dependencies. Even more useful, if you need to move them from one VPS to another, or upgrade/downgrade core hardware, it’s now a process that takes minutes. Absolutely beautiful.
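
    To illustrate the kind of Compose file being described — the app image, names, and credentials below are placeholders, not a recommendation:

        # docker-compose.yml (illustrative)
        services:
          app:
            image: ghcr.io/example/app:latest   # placeholder image
            depends_on:
              - db
            environment:
              DATABASE_URL: postgres://app:secret@db:5432/app
          db:
            image: postgres:16
            volumes:
              - dbdata:/var/lib/postgresql/data
        volumes:
          dbdata:

    docker compose up -d brings the whole stack up, docker compose down tears it back down, and the compose file (plus an optional .env) is essentially the entire config.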

  • kutsyk_alexander@lemmy.world · 4 months ago (edited)

    I use a Raspberry Pi 4 with a 16 GB SD card. I simply don’t have enough memory and CPU power for 15 separate database containers, one for every service I want to use.

  • ZiemekZ@lemmy.world · 4 months ago

    I consider them unnecessary layers of abstraction. Why do I need to fiddle with Docker Compose to install Immich, Vaultwarden, etc.? Wouldn’t it be simpler if I could just run sudo apt install immich vaultwarden, just like I can do sudo apt install qbittorrent-nox today? I don’t think there’s anything that prohibits them from running on the same bare metal; actually, I think they’d both run as well as they do in Docker (if not better, because of the lack of overhead)!

    • boonhet@sopuli.xyz · 4 months ago

      Both your examples actually include their own bloat to accomplish the same thing that Docker would. They both bundle the libraries they depend on as part of the build.

  • layzerjeyt@lemmy.dbzer0.com · 4 months ago

    Every time I have tried it, it just introduces a layer of complexity I can’t tolerate. I have struggled to learn everything required to run a simple Debian server. I don’t care what anyone says, Docker is not simpler or easier. Maybe it is when everything runs perfectly, but it never does, so you have to consider the eventual difficulty of troubleshooting. And that would be made all the more cumbersome if I do not yet understand the fundamentals of a Linux system.

    However, I do keep a list of packages I want to use that are Docker-only, so if one day I feel up to it, I’ll be ready to go.

  • mesa@piefed.social · 4 months ago (edited)

    All my services run on bare metal because it’s easy, and the backups work. It heavily simplifies the work, and I don’t have to worry about things like a virtual router, or using more CPU just to keep the container… contained and running. Plus a VERY tiny system can run:

    1. PeerTube
    2. GoToSocial + client
    3. RSS
    4. A search engine
    5. A number of custom sites
    6. Backups
    7. A Matrix server/client
    8. …and a whole lot more

    Without a single Docker container. It’s using around 10-20% of the RAM, and doing a dd once in a while keeps everything as is. It’s been 4 years-ish and has been working great. I used to over-complicate everything with Docker + Docker Compose, but I would have to keep up with the underlying changes ALL THE TIME. It sucked, and it’s not something I care about on my weekends.
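
    For reference, that periodic dd backup is nothing fancier than imaging the whole system disk; the device and destination paths below are assumptions — check lsblk for your own disk first:

        # clone the system disk to an image on external/backup storage
        sudo dd if=/dev/sda of=/mnt/backup/server-$(date +%F).img bs=4M status=progress conv=fsync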

    I use Docker, Kubernetes, etc., etc., all at work. And it’s great when you have the resources + coworkers that keep things up to date. But I just want to relax when I get home. And it’s not the end of the world if any of them go down.

    • Auli@lemmy.ca · 4 months ago

      Oh, so the other 80% of your RAM can sit there and do nothing? My RAM is always at around 80% or so, as it’s caching stuff like it’s supposed to.
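
      The distinction is easy to check with free(1): memory used for page cache shows up under buff/cache and gets handed back to applications whenever they need it.

          # "used" vs "buff/cache" tells the two apart; cache is reclaimable
          free -h
          # or the kernel's own estimate of what applications could still claim
          grep MemAvailable /proc/meminfo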

          • mesa@piefed.social · 4 months ago

            Welp, OP did ask how we set it up. And for a family instance it’s good enough. The RAM was extra that came with the computer. I have other things to do than optimize my family home server. There’s already no latency at all.

            It spikes when PeerTube videos are uploaded and transcoded, plus Matrix sometimes. Have a good night!

    • Miaou@jlai.lu · 4 months ago

      Assuming you run Synapse, which uses more than 1.5 GB of RAM just idling, your system has at the very least 16 GB of RAM… Hardly what I’d call “very tiny”.

      • mesa@piefed.social · 4 months ago (edited)

        …OK, so I’m lying about my system for… some reason?

        Synapse looks like it’s using 200 MB right now. It jumps to 1 GB when it’s being heavily used, but I only use it for PieFed and a couple of other local rooms. Honestly it’s not doing much for us, so we were thinking of getting rid of it. It’s irritating to keep having to set up new devices, and no one is really using it.

        PeerTube is much bigger, running at around 500 MB just doing its thing.

        It’s a single-family instance.

        # ps -eo user,pid,ppid,cmd,pmem,rss --no-headers --sort=-rss | awk '{if ($2 ~ /^[0-9]+$/ && $6/1024 >= 1) {printf "PID: %s, PPID: %s, Memory consumed (RSS): %.2f MB, Command: ", $2, $3, $6/1024; for (i=4; i<=NF; i++) printf "%s ", $i; printf "\n"}}'  
        PID: 2231, PPID: 1, Memory consumed (RSS): 576.67 MB, Command: peertube 3.6 590508 
        PID: 2228, PPID: 1, Memory consumed (RSS): 378.87 MB, Command: /var/www/gotosocial/gotosoc 2.3 387964 
        PID: 2394, PPID: 1, Memory consumed (RSS): 189.16 MB, Command: /var/www/synapse/venv/bin/p 1.1 193704 
        PID: 678, PPID: 1, Memory consumed (RSS): 52.15 MB, Command: /var/www/synapse/livekit/li 0.3 53404 
        PID: 1917, PPID: 645, Memory consumed (RSS): 45.59 MB, Command: /var/www/fastapi/venv/bin/p 0.2 46680 
        
  • kiol@lemmy.world (OP) · 4 months ago

    Are you concerned about your self-hosted bare metal machine being a single point of failure? Or, are you concerned it will be difficult to reproduce?