UPDATE EDIT:
Man, it is crazy to watch the dashboard and console at the same time. Even with no HDDs spinning, and with as much RAM as I can give the Scale VM, services just slowly take over the RAM until the console shows a kernel panic.

Core was solid for so long, with everything I threw at it.

It runs out of memory after services soak up all the RAM; the ZFS cache gets choked down to 3GB out of 16.
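
If you want to see the same squeeze from a shell instead of the dashboard, the stock ZFS-on-Linux counters show it (these paths are the standard ones on Scale):

```
# ARC size in bytes ("size" row, third column)
grep '^size' /proc/spl/kstat/zfs/arcstats

# overall memory picture as services eat into it
free -h

# or the bundled summary tool, if installed
arc_summary | head -n 25
```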

  • Xeon E3-1265L v2
  • Asus P8Z77-V Deluxe
  • 32GB DDR3
  • HBA passed through to TrueNAS, running a mirror pool

The TrueNAS VM is running on the local Proxmox SSD.

  • Proxmox 9.1.1
  • TrueNAS Scale 25.10.0.1, but I tried a 24.x version as well

Once the install starts crashing, the VM will still crash after booting up even without the HBA card.

I’ve seen a few posts from other people having the out-of-memory (OOM) issue, but almost every reply says it will be fixed in the next update, and that “next update” is older than the version I’m running now.

It did run okay JUST long enough for me to make the mistake of upgrading the ZFS feature flags, so now I can’t roll back to Core.

Does Scale have this issue because it’s virtualized? Would it run better on bare metal?

Anyone tried XigmaNAS? At least it’s FreeBSD-based again.

Unraid looks okay, but there’s a paywall?

OpenMediaVault?

Any advice or discussion is appreciated!

  • SapphironZA@sh.itjust.works · 9 hours ago (edited)

    My experience with TrueNAS has been that ZFS does not like virtual disks, especially when the Proxmox host also uses ZFS. Two layers of ZFS ARC caching create memory pressure problems. Setting the host datasets to cache metadata only may help.
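
    If it helps, a minimal sketch of that host-side tweak (the dataset name is a placeholder for wherever your VM disks actually live):

    ```
    # on the Proxmox host: keep only metadata in the ARC for VM-backing datasets
    zfs set primarycache=metadata rpool/data

    # confirm it took
    zfs get primarycache rpool/data
    ```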

    But the most reliable method would be hardware passthrough of the physical disks to the VM. It gets you most of the bare-metal reliability benefits without having to commit the entire box to one OS.
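
    Roughly like this on the Proxmox host (VM id 100 and the disk id are placeholders; list /dev/disk/by-id to find yours):

    ```
    # attach a whole physical disk to VM 100 as a virtual SCSI device
    qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX-EXAMPLE_SERIAL
    ```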

    You may also want to disable memory ballooning in your VMs. It works well when you have lots of small VMs, but with a few large ones it can cause issues if you overallocate RAM beyond what the host OS actually has available. I suspect it could be interfering with the ZFS ARC as well.
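
    Disabling it is a one-liner (VM id is again a placeholder):

    ```
    # pin the VM at its full memory allocation, no ballooning
    qm set 100 -balloon 0
    ```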

    Lastly, check that your VM is set to use the “host” CPU type. TrueNAS would likely benefit from having access to the full set of CPU features.
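
    Same idea:

    ```
    # expose the host CPU's full feature set to the guest
    qm set 100 -cpu host
    ```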

  • monkeyman512@lemmy.world · 19 hours ago

    I have a TN Scale VM hosted in Proxmox. The only “issue” I have is that the webgui gets pushed to swap if it isn’t used for more than a week, so when I connect it literally takes a couple of minutes while it gets shuffled back into RAM. Once it’s “warmed up” it’s fine. But my Scale VM is only doing these things: managing ZFS pools, controlling NFS/Samba shares, and replicating pool snapshots to an off-site backup server. I intentionally have it do nothing else. All other services are in different VMs or LXC containers in Proxmox.

    Does your Scale install have any swap space set up? That should prevent out-of-memory crashes. Potential performance issues would be better than crashing.
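
    A quick way to check from inside the VM, plus one hedged way to add some if there is none (the pool name and size are assumptions, and swap on a zvol has known caveats under heavy memory pressure):

    ```
    # is any swap active at all?
    swapon --show
    free -h

    # if not: a zvol-backed swap sketch (ZFS root, so no plain swapfile)
    zfs create -V 4G -b $(getconf PAGESIZE) boot-pool/swap
    mkswap /dev/zvol/boot-pool/swap
    swapon /dev/zvol/boot-pool/swap
    ```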

  • TerHu@lemmy.dbzer0.com · 1 day ago

    I’ve used TrueNAS Scale on an old Xeon with 32GB RAM and then moved it over to Proxmox on an i5-12600 with 64GB DDR5. TrueNAS is installed to a virtual drive provided by Proxmox, but all the other drives are SATA passthrough and TrueNAS handles the raw disks. The TrueNAS VM has eight cores and 32GB RAM, and is running Scale 25.10.0.1. So far I’ve got four SATA SSDs attached to it and am running 20+ apps without issues.

    I know this doesn’t help you much beyond confirming that it does actually work under Proxmox 9.0.11.

    good luck!

      • TerHu@lemmy.dbzer0.com · 13 hours ago

        BTW, I tried to add two HDDs today and SATA passthrough didn’t let me create a new pool even though they showed up in TrueNAS; something about duplicate serial numbers. So I decided to pass through my CPU’s SATA controller instead (Proxmox and the TrueNAS virtual boot drive run off NVMe). Rebooted Proxmox and it worked: all drives detected and functional (after removing their individual passthrough entries, since Proxmox could no longer see them once it lost access to the SATA controller).
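
        For anyone wanting the gist of the controller passthrough (the PCI address varies per board, so check lspci first; VM id 100 is a placeholder):

        ```
        # on the Proxmox host: locate the SATA controller
        lspci | grep -i sata

        # pass the whole controller to VM 100 (address is an example)
        qm set 100 -hostpci0 0000:00:17.0
        ```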

  • rockyracoon@lemmy.world · 22 hours ago

    I’ve had a good experience with TrueNAS Scale both on bare metal and in a Proxmox VM. I don’t have a lot of heavyweight stuff on Proxmox though, mainly LXC containers and a few VMs. I also have a VM in TrueNAS for Jellyfin that I pass a GPU into. It’s been running that way for several years without issue.

    • fleem@piefed.zeromedia.vip (OP) · 15 hours ago

      TrueNAS Core was my first bare metal install! It ran awesomely for about a year, until I realized I wanted to do more than the FreeBSD jails were letting me.

      And the Core install on a Proxmox VM was seemingly bulletproof! I kick myself for wanting a Syncthing setup without a bunch of mounts and shares. I was elated at the idea that TrueNAS was finally just another Debian machine, and my buddy scared me by letting me know that Core was in maintenance mode. So I pulled the trigger.

      I am debating wiping Proxmox and letting this machine be Scale on bare metal, but I would have to figure out what to do with Immich and Jellyfin in the meantime.

  • snekerpimp@lemmy.world · 24 hours ago

    Have you made sure your RAM modules are all good? I was getting funky behavior with TrueNAS in a Proxmox VM; come to find out, one of my 32GB sticks was bad. Removed the stick, no funky behavior. Replaced it with a different stick, and it’s been solid since, maybe a year and a half on now.

    • fleem@piefed.zeromedia.vip (OP) · 17 hours ago

      This is what I am hoping to avoid! But no, not yet. Does Debian/Proxmox have a built-in memtest? Or is this through the BIOS? Or do I just start removing/swapping RAM?

      • snekerpimp@lemmy.world · 15 hours ago

        I have NetBoot.xyz on my network for iPXE booting. They have Memtest as a boot image; I ran that for a few days. I don’t remember if it found the bad stick or if I just started pulling sticks and booting, process of elimination.
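
        If you’d rather not netboot, GRUB-booted Debian/Proxmox installs can also add it locally (ZFS installs that boot via systemd-boot are a different story):

        ```
        # adds a Memtest86+ entry to the GRUB boot menu
        apt install memtest86+
        update-grub
        # then reboot and pick the Memtest86+ entry
        ```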

  • PeriodicallyPedantic@lemmy.ca · 1 day ago

    Not practical advice, but since you’re throwing out alternatives, one that I’ve had my eye on is HexOS.
    It’s a TrueNAS Scale (fork?) by a bunch of ex-Unraid devs with the goal of making TrueNAS as easy as Unraid.

    It still has a few more months of beta, though, so I haven’t tried it. Also, like Unraid, it has a paywall.

    • fleem@piefed.zeromedia.vip (OP) · 15 hours ago

      Heck, if I had enough money for paywalls, I’d be donating to: shit, I was typing up a whole list before realizing I was basically just announcing my attack surface to the webz.

      Suffice to say, I’d deffa be throwing the open source devs some love!

      • PeriodicallyPedantic@lemmy.ca · 6 hours ago

        Yeah, fair.

        It’s also kind of a fuzzy area, because some of their code is open source, but not all. And IIRC they’re partly funded by the folks who make TrueNAS, but supporting a project backed by open-source folks isn’t the same as supporting open-source folks themselves.

        I figured I’d toss it out there anyway.

  • Suzune@ani.social · 24 hours ago

    Next time you make a major change like a ZFS upgrade, create a checkpoint and keep it for a while. You can roll back everything, even the pool version.
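
    The whole dance, with a placeholder pool name (a checkpoint keeps referencing old blocks, so it consumes space while it exists):

    ```
    # before the risky change
    zpool checkpoint tank

    # once you're happy, discard it
    zpool checkpoint -d tank

    # or to undo everything: export, then rewind on import
    zpool export tank
    zpool import --rewind-to-checkpoint tank
    ```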

    I personally like to run ZFS on a bare metal server, just the plain OS, no further “NAS” or virtualization software.

    I don’t really know what your use cases are, so I cannot tell if it’s adequate for you.

    • fleem@piefed.zeromedia.vip (OP) · 15 hours ago

      One thing I can be certain of is that I barely know enough to have gotten this stuff going!

      I will learn more about ZFS checkpoints! Thanks for the tip.

  • tvcvt@lemmy.ml · 22 hours ago

    I’m not sure what it is, but Scale has never thrilled me. I’ve tested it a couple of times and just didn’t get along well with it. I know Jim Salter (practicalzfs.com) has frequently recommended XigmaNAS as a strong (albeit less pretty) alternative to TrueNAS. I did some tests with that as well and it seemed perfectly fine. In the end I decided that when I migrate off of Core this winter, it’ll be to a bare-metal FreeBSD system. I’m using it as an excuse to learn that ecosystem better and to bone up on Ansible, which I’m using to define all of my settings.