30 Jan 2023 |
guntherdw | But, in my experience QEMU doesn't fully "hog" the RAM you give to a VM | 21:01:52 |
guntherdw | So even if you give a VM 16GB at bootup, if it isn't caching any files or other resources it won't show up as 16GB in htop on the host machine | 21:02:24 |
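The lazy allocation guntherdw describes can be demonstrated with a short sketch (Linux-only; the 256 MB size is illustrative): anonymous memory is demand-paged, so reserving address space costs almost no resident memory until the pages are actually written.

```python
# Sketch (Linux-only): anonymous memory is demand-paged, so reserving
# address space costs almost no resident memory until pages are written.
# This mirrors why an idle 16GB guest doesn't show 16GB in htop on the host.
import mmap
import re

def rss_kb():
    # Resident set size of this process, in kB
    with open("/proc/self/status") as f:
        return int(re.search(r"VmRSS:\s+(\d+)", f.read()).group(1))

size = 256 * 1024 * 1024           # reserve 256 MB of address space
buf = mmap.mmap(-1, size)          # anonymous mapping, not yet backed by RAM
before = rss_kb()
for off in range(0, size, 4096):   # touch every page -> pages get allocated
    buf[off] = 1
after = rss_kb()
print(f"RSS before touching pages: {before} kB, after: {after} kB")
```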
Julien | In reply to @guntherdw:wrongplace.be https://medium.com/coccoc-engineering-blog/the-way-of-kvm-memory-management-9670f0f852af oh so there's oVirt to do that auto-ballooning | 21:02:36 |
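On the KVM/libvirt side, ballooning like the linked article describes boils down to a domain definition along these lines (a hedged sketch; values illustrative): `memory` is the ceiling, `currentMemory` the current ballooned target, and the virtio memballoon device is what lets the host reclaim the difference.

```xml
<!-- Illustrative libvirt domain fragment: the guest can be ballooned
     between currentMemory and memory via the virtio balloon device. -->
<memory unit='GiB'>16</memory>
<currentMemory unit='GiB'>8</currentMemory>
<devices>
  <memballoon model='virtio'/>
</devices>
```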
Julien | And that's also one thing I like about LXC: you don't have to pre-allocate memory. In my case I don't put any limits on resources, so all my containers have access to the entire system. I don't have to allocate N cores to a VM; instead my container has access to all 32 threads. | 21:05:02 |
Julien | I can then put QoS in place to prioritize some containers | 21:05:13 |
Julien | I could put limits on resources of course, but in my case I don't need to | 21:05:27 |
guntherdw | In ESX you can do such a thing as well, but it's a bit less fine grained | 21:05:53 |
guntherdw | The bad part about memory ballooning in ESX, though, is that I wasn't aware it was happening; it's not always clear it's doing it until you actually check with vmware-toolbox-cmd | 21:07:52 |
guntherdw | It just shows up as "memory in use", with no actual process taking up said memory | 21:08:14 |
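That "in use but owned by nobody" pattern can be spotted by comparing used memory against the summed RSS of all processes (a rough Linux sketch; page cache and shared pages blur the numbers). On an ESX guest, a check with vmware-toolbox-cmd, as mentioned above, then confirms whether the balloon accounts for the gap.

```python
# Rough Linux sketch: compare "memory in use" against the summed RSS of
# every process. When a guest is being ballooned, the gap grows with no
# process to blame (page cache and shared pages also blur the numbers).
import glob
import re

def meminfo_kb(field):
    with open("/proc/meminfo") as f:
        return int(re.search(rf"{field}:\s+(\d+)", f.read()).group(1))

used_kb = meminfo_kb("MemTotal") - meminfo_kb("MemAvailable")

rss_kb = 0
for path in glob.glob("/proc/[0-9]*/status"):
    try:
        with open(path) as f:
            m = re.search(r"VmRSS:\s+(\d+)", f.read())
        if m:  # kernel threads have no VmRSS line
            rss_kb += int(m.group(1))
    except OSError:
        pass  # process exited between listing and reading

print(f"in use: {used_kb} kB, summed process RSS: {rss_kb} kB")
```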
guntherdw | One of my VMs was somewhat misconfigured and was randomly getting OOMs left, right, and center, and it took me quite a while to figure out it was other VMs on the same host causing it | 21:09:17 |
Julien | In reply to @julien:jroy.ca And that's also one thing I like about LXC: you don't have to pre-allocate memory. In my case I don't put any limits on resources so all my containers have access to the entire system. I don't have to allocate N cores to a VM, instead my container has access to all 32 threads. though I guess that could allow a compromised container to drain the system for everyone else | 21:09:21 |
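For anyone who does want the safety net Julien mentions, plain LXC can cap a container via cgroup keys in its config file, e.g. (a hedged sketch; the container name and values are illustrative):

```ini
# /var/lib/lxc/mycontainer/config -- illustrative cgroup v2 limits
lxc.cgroup2.memory.max = 4G      # hard memory cap
lxc.cgroup2.cpuset.cpus = 0-7    # pin to 8 threads instead of all 32
lxc.cgroup2.cpu.weight = 50      # lower scheduling priority (default 100)
```

The `cpu.weight` knob is also one way to get the QoS-style prioritization between containers mentioned earlier.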
guntherdw | Well, not directly, but my memory was just a tiny bit too much under pressure | 21:10:17 |
Julien | Heh. I had this problem a lot on my previous server | 21:10:40 |
guntherdw | I was under the impression I had more than enough still available by looking at the "RAM in use" counter in my vSphere UI; apparently that was a bit misleading | 21:10:44 |
Julien | Had only 24GB of RAM and I crammed too many VMs in it | 21:10:51 |
Julien | I was way over-committed; in fact, after a reboot I needed to wait for the hypervisor to reclaim memory from the booted VMs so the other ones could boot 🤣 | 21:11:23 |
guntherdw |  Download Screenshot_20230130_221212.png | 21:12:25 |
guntherdw | I really need to get more memory in that one box xD | 21:12:32 |
guntherdw | It's not one that I run for a corp or anything, just my ~10-year-old home server running a couple of services like DNS, mail, etc. | 21:13:28 |
Julien | am I reading this right? you have 4GB on the host? | 21:13:23 |
guntherdw | In reply to @guntherdw:wrongplace.be I really need to get more memory in that one box xD Especially now that Devuan initramfs's are getting quite bloated and require quite a bit of RAM to boot up and not OOM | 21:17:47 |
Julien |  Download clipboard.png | 21:17:44 |
Julien | here is mine now | 21:17:49 |
guntherdw | When it's running it's fine, but that initramfs, good lord | 21:18:03 |
Julien | 64 is more than enough for now :D | 21:18:02 |
guntherdw | I used to be able to run my ZNC VM off of like less than 128MB of RAM | 21:18:19 |
Julien | gee | 21:18:42 |
Julien | how did that work | 21:19:10 |
Julien | I mean my kernel alone takes more than that | 21:19:32 |
Julien | you mean 128MB allocated to the VM ? | 21:20:12 |