#Proxmox


New blogpost: "Trialling a move from a NUC to a container"

I spent some time today setting up a container to run my webserver and related tools (including mkdocs and hugo), as a test replacement for the current Intel NUC.

A current unknown - nothing like testing in production :) - is how well this small container will stand up to a lot of fedi-originated traffic. This is that test.

neilzone.co.uk/2025/04/trialli

[Image description: photo of me, a white man with a short dark beard and dark hair, smiling at the camera, while sitting in front of a vintage terminal with green text on the screen.]
#FOSS #Linux #proxmox

Lastly, I have #immich in a #proxmox VM as a readonly viewer of the samba share so I can see photos on my phone and other devices. My devices connect to #wireguard when out of the house so they can still access the server to sync!
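
As a rough sketch of the read-only part (the IP, share path and credentials file here are placeholders, not the actual setup), the Samba share can be mounted read-only inside the Immich VM so Immich can only ever view the photos:

```
# /etc/fstab in the Immich VM - mount the Samba share read-only (example values)
//192.168.1.10/media  /mnt/media  cifs  ro,credentials=/root/.smbcred,_netdev  0  0
```

Immich can then be pointed at /mnt/media as an external library.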

Hope that is helpful to someone, and let me know what I'm doing wrong and can improve!

🧵 4/4


Next, I run #syncthing on my laptop and on my #homelab #intelN100 #n100 mini PC / server that sits in the cupboard and is very #lowpower. It runs #proxmox, and it also has a #samba share which allows any other network devices to see the media.

With syncthing running, I always have two copies of the media. For backup I was using #rclone to send an encrypted copy to #googledrive, which I am in the process of switching over to #nextcloud running on #hetzner.
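
For illustration, the encrypted rclone backup typically boils down to a crypt remote wrapping the Google Drive remote (remote names below are made up for the example):

```
# one-time setup: create a "gdrive" remote, then a "gdrive-crypt" remote of type crypt wrapping it
rclone config

# afterwards, push an encrypted copy of the media
rclone sync /srv/media gdrive-crypt:media --progress
```

Switching the backend to Nextcloud later is then mostly a matter of pointing the crypt remote at a WebDAV remote instead.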

🧵 3/4

Been testing out the #virtiofs support now baked into #proxmoxVE. It works, though I had to make some #selinux adjustments on #fedora to allow my #podman containers to use the mountpoint. Added this policy:

```
(allow container_t unlabeled_t ( dir ( read write )))
```
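
For anyone wanting to reproduce this, a CIL rule like that can be dropped into a file and loaded with semodule; the file name and virtiofs tag below are just examples:

```
# load the policy module (save the rule above as e.g. virtiofs-podman.cil first)
semodule -i virtiofs-podman.cil

# the virtiofs share itself is mounted in the guest by its tag, e.g.:
mount -t virtiofs media /mnt/media
```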

In raw speed it is definitely not a winner - #nfs is easily double the speed. But on this particular VM I don't need the speed - it is nice that this is all self-contained now, and I can actually remove NFS altogether.

Bruh, I might've wasted my time learning how to pass through a GPU to an #LXC container on #Proxmox (as well as mount an SMB/CIFS share) and writing up a guide (haven't been able to test yet, except with the latter), all by doing some seemingly magic #Linux fu with user/group mappings and custom configs, if it turns out that you could actually achieve the same result just as easily graphically using a standard wizard on PVE.

It's 4am; I'll prolly try to find time later during the day, or rather the evening (open house to attend at noon), to try using the wizard to 1) add a device passthrough on an LXC container for my #AMD iGPU (until my #Intel #ArcA380 GPU arrives) and see if the root user + service user on the container can access it/use it for transcoding on #Jellyfin/#ErsatzTV, and 2) add an SMB/CIFS storage on the Proxmox Datacenter, tho my #NAS is also just a Proxmox VM in the same cluster (not sure if this is a bad idea?), and see if I could mount that storage to the LXC container that way.

#Homelab folks, feel free to give some tips or wtv if you've done this before!
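
For reference, on recent PVE releases the wizard route apparently boils down to a single line in the container config; the device path and GID below are assumptions for a typical Debian container whose render group is 104:

```
# /etc/pve/lxc/<CTID>.conf - roughly what the GUI "Device Passthrough" entry adds
dev0: /dev/dri/renderD128,gid=104,mode=0660
```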

Seems like Snowflake proxy v2.11.0 uses more RAM, about 100 - 200 MB. This used to be around 20 - 50 MB.

Still have enough RAM on my Proxmox server to dedicate to this VM, though, so I've bumped it from 3 GB to 4 GB of RAM, since I run 20 snowflake proxy processes on this VM.

I'm gonna check back later and see if it uses up all that 4 GB.
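
One quick way to check is to total the resident memory of the proxy processes (assuming the binary shows up as snowflake-proxy in the process list - adjust the name if yours differs):

```
# sum the RSS of all snowflake proxy processes, in MB
ps -C snowflake-proxy -o rss= | awk '{sum+=$1} END {print sum/1024 " MB"}'
```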

I use Fedora, btw.

I'm writing a guide on splitting a GPU passthrough across multiple #Proxmox #LXC containers based on a few resources, including the amazing Jim's Garage video.

Does anyone know the answer to this question of mine, though: why might he have chosen to map a seemingly arbitrary GID 107 on the LXC container to the Proxmox host's render group GID of 104, instead of mapping 104 -> 104, as he did with the video group, where he mapped 44 -> 44 (which seems to make sense to me)?

I've watched his video seemingly a million times, and referred to his incredibly simplified guide on his GitHub that's mostly only meant for copy-pasting purposes, and I couldn't quite understand why yet. I'm not sure if it really is arbitrary and 107 on the LXC container could be anything, including 104 if we wanted to... or if it (i.e. 107) should've been the LXC container's actual render group GID, in which case it should've also been 104 instead of 107 on his Debian LXC container, as it is on mine.
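
For context, the manual mapping in question lives in the container config on the Proxmox host; a sketch of the 104 -> 104 case being asked about might look like this (CT ID is a placeholder, and the surrounding ranges just keep the rest of the unprivileged mapping intact):

```
# /etc/pve/lxc/<CTID>.conf (unprivileged container)
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 104
lxc.idmap = g 104 104 1
lxc.idmap = g 105 100105 65431

# /etc/subgid on the host must also allow root to map that GID:
# root:104:1
```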

Anyway, super excited to test this out once my #Intel #ArcA380 arrives. I could probably already test it by passing through one of my node's Ryzen 5 5600G iGPU, but I worry I'd screw something up, seeing that it's the only graphics onboard the node.

🔗 https://github.com/JamesTurland/JimsGarage/issues/141


It gets talked about less, and yet it's essential for seeing a project through, especially when data gets lost 😉
I'm talking about #Proxmox #Backup Server 😎
It has just moved to version 3.4 🤙
With that...
👉 Optimized garbage collection (purge/cleanup) performance;
👉 Granular backup snapshot selection for sync jobs;
👉 A static build of the Proxmox Backup client;
👉 Increased throughput for tape backup;
...

I love #Podman, but gosh is it needlessly complicated (to set up correctly) compared to #Docker. I'll continue using it over Docker on my systems, but for recommending/advocating to other people (when it comes to containerisation), maybe I'll stick with Docker.

If you're just setting it up on your personal machine, it's easy - some aspects may even be simpler than Docker - but the moment you start getting into things like getting it to work on a #Proxmox #LXC container... it gets messy real fast.
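
For anyone hitting the same wall, a good chunk of the PVE-side fiddling comes down to container features that have to be enabled before Podman will behave inside an LXC guest; a minimal sketch (the CT ID is a placeholder):

```
# enable nesting and keyctl on the container - commonly needed for Podman inside an unprivileged LXC
pct set <CTID> --features nesting=1,keyctl=1
```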