2025-03-15

single point of failure

i was hoping this would not happen so early, but it did.

there was an issue with one of my virtual machines today. this vm in particular contains 3 important instances: matrix, xmpp, and my blog. so all 3 of them went down simultaneously while i was making breakfast.

since my sip service is integrated with xmpp, my phone is out too. so that prompted me to improve the high availability of my services. (and of course, i interrupt my own writing whenever i want, with whatever topics i want.)

here is the list of services i would like to make more durable: matrix, xmpp, sip, and my blog.

the only question is: how do i improve it?

so i started to think... with the help of my brain.

connecting to a friend's proxmox server for high availability

when a vm fails in one location, i would prefer to use proxmox to spawn a clone of that vm at my friend's location. the more technical friends i have, the more resilient my internet services will be.
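as a rough sketch of what that could look like (the vm id 101, the backup path, and the archive name are made up for illustration; this assumes a backup mount both proxmox hosts can reach, not a real proxmox cluster):

```shell
# on my node: dump the vm to storage my friend can reach
# (snapshot mode keeps the vm running during the backup)
vzdump 101 --dumpdir /mnt/backup --mode snapshot --compress zstd

# on my friend's node: restore the dump as a new vm and boot it
# (in practice the archive name carries a timestamp, not "latest")
qmrestore /mnt/backup/vzdump-qemu-101-latest.vma.zst 101
qm start 101
```

a real proxmox cluster with HA could do the failover automatically, but that wants shared storage and low latency between nodes, which two homes on residential internet probably won't have.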

dedicating physical servers for single purpose outside of proxmox

i need more cheap nucs.

distributing responsibility to onion network

hosting services over the onion network itself is rather simple. in fact, it's so simple that it's the most unlikely-to-fail part of my plan. all it needs is an internet cable and a raspberry pi.

this special rpi will run its service on the onion network. although at the moment i don't know what services to host on a rpi, it's likely something related to dns or communication.
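for reference, an onion service really is just a couple of lines in torrc (the directory and ports here are an example; this would sit in front of whatever the rpi ends up serving on port 8080):

```
# /etc/tor/torrc on the rpi
HiddenServiceDir /var/lib/tor/my_service/
HiddenServicePort 80 127.0.0.1:8080
```

after restarting tor, the hostname file inside that directory contains the generated .onion address.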

probably email server. i sure suck at running it... or do i?

less self-hosting

if i host fewer services, there will be fewer chances for things to fail.

to do this while not relying on paid or proprietary services, i could let my friends host some services i need. then in return, i could host some services they need.

collectively as a cluster, there are fewer machines or virtual machines running, which lowers the maintenance cost per machine. that means more tinkering time per machine, and in turn, potentially better quality of service.