This document describes how I manage my personal server in 2020. It will focus on:
- Managing secrets with SOPS and a GPG key
- Automatic management of DNS records
- Configuring Debian and installing Kubernetes k3s
- Setting up Nginx ingress with Let's Encrypt for automatic TLS certificates
- Deploying postfix + dovecot as an email server
- Installing Nextcloud to get your own cloud in the sky
- Putting backups in place
- Using Wireguard to set up a private network and WsTunnel to bypass firewalls
- Adding a Raspberry Pi to the K3s cluster
My goals for this setup are:
- Easy to deploy, manage and update
- Everything should live in the git repository
- Automating as much as possible with free tier services (GitHub Actions) while staying reproducible locally
- Packaging and deploying system applications and my own projects the same way
- The road so far
- Creating a GPG key
- Encrypting secrets with Sops
- Generating a new ssh key
- Automating the install with a Makefile
- Choosing a server provider
- Securing and automating the install of the base machine
- Choosing your registrar for DNS
- Automating your DNS record updates
- Installing Kubernetes K3S
- Nginx as Ingress controller for Kubernetes
- CertManager with Let's Encrypt for issuing TLS certificates
- Mail Server with Postfix + Dovecot + Fetchmail + SpamAssassin
- Automating build and push of our images with GitHub Actions
- Hosting your own cloud with Nextcloud
- Backups
- [TODO] Monitoring with netdata
- VPN with Wireguard
- Bypass firewalls with WsTunnel
- Raspberry Pi as a k8s node using your Wireguard VPN
- Deploying PiHole on your Raspberry Pi
- Conclusion
- If you would like more freedom
I have been managing my own dedicated server for more than 15 years now, I am in my thirties, and it all started thanks to a talk by Benjamin Bayart, "Internet libre, ou Minitel 2.0", or for the non-French "Free internet, or Minitel 2.0". If you have no idea what a Minitel is, let me quote Wikipedia for you.
The Minitel was a videotex online service accessible through telephone lines, and was the world's most successful online service prior to the World Wide Web. It was invented in Cesson-Sévigné, near Rennes in Brittany, France.
In essence, the talk is about raising awareness, back in 2007, that the internet was starting to lose its decentralized nature and to look more like a Minitel 2.0, due to our reliance on centralized big corporations for almost everything on the Internet. This warning rings even louder nowadays with the advent of the Cloud, where our computers are merely a fancy display for accessing data/compute remotely.
I went from hardcore extremist, hosting a server made of scrap parts behind my parents' home phone line, using Gentoo to recompile everything, control every USE flag and have the sheer pleasure of adding -mtune=native
to the compilation command line. Some years later, fed up with spending nights recompiling everything on an old Intel Pentium 3 because I had missed a USE flag that was required for some new software, I switched to Debian.
At that point I thought I had the perfect setup: just do an apt-get install
and your software is installed in no time. Is there really anything better than that?
It was also at that time that I switched from hosting my server at my parents' home to a hosting company. I was away at university, and calling my parents to ask them to reboot the machine every time it froze due to aging components was taking too much time. I was living in constant fear of losing emails, and friends on IRC were complaining that the archive/history of the channel my server was providing was not accessible anymore. So, as hard as the decision was, especially since everything had been installed by hand without configuration management, I went to see my parents to tell them that I was removing the server from their care to host it at online.net,
and that they should expect even fewer calls from me from now on.
Thanks to this newly available bandwidth, and after porting my manual deployments to Ansible, I really thought that this time I had the perfect setup. Easy install and configuration management! Is there really anything better than that?
I had found my sailboat and sailed peacefully with it until the dependency monsters knocked me overboard. When you try to CRAM everything (mail, webserver, gitlab, pop3, imap, torrent, owncloud, munin, ...) into a single machine on Debian, you eventually end up activating the unstable repository to get the latest version of packages, and end up with conflicting versions between software, to the point that doing an apt-get update && apt-get upgrade
is now your nemesis.
While avoiding system upgrades, I spent some time playing with KVM/Xen, FreeBSD jails, Illumos, micro-kernels (I thought this would be the future :x) and the new player in town, Docker! I ended up using Docker, because I was too busy/lazy to reinstall everything on something new, and Docker allowed me to progressively isolate/patch the applications that were annoying me. Hello Python projects!
This hybrid setup worked for a while, but it felt clunky to manage, especially with Ansible in the mix. I ended up moving everything into containers, not without pain.
So I spent a bit of time this month to create and share with you my brand new setup for managing a personal server in 2020!
So let's start. The first step is to create a GPG key. This key will help encrypt every secret we have, so that we can commit them to the git repository. With the secrets in git, our repository will be standalone and portable across machines. We will be able to do a git clone
and get going!
This GPG key will be the guardian of your infrastructure: if it leaks, anyone will be able to access your infra. So store it somewhere safe.
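If you do not have a GPG key yet, generating one is quick; a minimal sketch (pick RSA 4096 and your own email at the interactive prompts):

gpg --full-generate-key
# note the key fingerprint, you will need it for .sops.yaml later
gpg --list-secret-keys --keyid-format LONG

The export commands below then give you the armored files to back up.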
gpg --armor --export erebe@erebe.eu > pub.asc
gpg --armor --export-secret-key erebe@erebe.eu > private.asc
Now that we have a PGP key, we can use the wonderful tool SOPS to encrypt our secrets with it.
Sops is not widely known, but it is very practical and simple to use, which is a big plus for once in a security tool.
To use it, create a config file at the root of your repository
❯ cat .sops.yaml
creation_rules:
- gpg: >-
YOUR_PGP_FINGERPRINT_WITHOUT_SPACE
After that, just invoke sops to create a new secret encrypted with your GPG key. Sops forces the use of YAML, so your file must be valid YAML.
❯ mkdir secrets secrets_decrypted
❯ sops secrets/foobar.yml
* editor with default values *
❯ cat secrets/foobar.yml # content of the file is now encrypted
hello: ENC[AES256_GCM,data:zpzQz+siZxcshJjmi4PBvX2GMm3sWibxRPCgil2mi+c6AQ0uXEBLM2lL0o+BBg==, ...
To decrypt your secrets just do a
sops -d --output secrets_decrypted/foobar.yml secrets/foobar.yml
Info: If you get an error like the one below when trying to decrypt
- | could not decrypt data key with PGP key:
| golang.org/x/crypto/openpgp error: Could not load secring:
| open /home/chronos/user/.gnupg/secring.gpg: no such file or
| directory; GPG binary error: exit status 2
try running this in your terminal
GPG_TTY=$(tty)
export GPG_TTY
https://github.com/mozilla/sops/issues/304#issuecomment-377195341
There are other commands that allow you to avoid dumping your decrypted secrets onto the file system. If you are interested in this feature look at
sops exec-env
# or
sops exec-file
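For instance, a hypothetical run against the foobar.yml secret from above could look like this:

# Run a command with the secret's keys exported as environment variables
sops exec-env secrets/foobar.yml 'echo $hello'
# Or run a command with a decrypted copy exposed as a temporary file ({} is the placeholder)
sops exec-file secrets/foobar.yml 'cat {}'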
Now that we are able to store secrets securely within our repository, it is time to generate a new ssh key in order to be able to log in to our future server.
We are going to set a passphrase on our ssh key and use ssh-agent/keychain to avoid typing it every time.
# Don't forget to set a strong passphrase and change the default name
# for your key from id_rsa to something else, it will be useful later on
ssh-keygen
# To add your ssh key into the keyring
eval $(keychain --eval --agents ssh ~/.ssh/your_private_key)
We are going to commit this ssh key into the repository with sops
sops secrets/ssh.yml
# edit the yaml file to create 2 sections for your private and public ssh key
# paste the content of your keys in those sections
git add secrets/ssh.yml
git commit -m 'Adding ssh key'
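For illustration, the decrypted secrets/ssh.yml would be shaped roughly like this (the key names are the ones the Makefile below extracts; the actual key material is elided):

public_key: |
    ssh-rsa AAAA... erebe@laptop
private_key: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----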
Now we want this repository to be self-contained and easily portable across machines. A valid approach would have been to use Ansible to automate our deployment. But we will not tap much into the full power of a configuration management system in this setup, so I chose a simple Makefile to automate the deployment.
❯ mkdir config
❯ cat Makefile
.PHONY: install

install:
	sops -d --extract '["public_key"]' --output ~/.ssh/erebe_rsa.pub secrets/ssh.yml
	sops -d --extract '["private_key"]' --output ~/.ssh/erebe_rsa.key secrets/ssh.yml
	chmod 600 ~/.ssh/erebe_rsa.*
	grep -q erebe.eu ~/.ssh/config > /dev/null 2>&1 || cat config/ssh_client_config >> ~/.ssh/config
	mkdir ~/.kube || exit 0
The install section decrypts the ssh keys, installs them, and looks into my ~/.ssh/config to check if I already have a section for my server, adding it if missing. With that I will be able to do a ssh my-server
and get everything set up correctly.
We have a git repository with our ssh keys, so now is the time to use those keys and get a real server behind them.
For my part I use a 1st tier dedibox from online.net,
now renamed Scaleway,
for 8€ per month. Their machines are rock solid and cheap, and I have never had an issue with them in more than 15 years. You are free to choose whatever provider you want, but here are my tips on what to look at:
- Disk space: Using containers consumes a lot of disk space. So take a machine with no less than 60G of disk.
- Public bandwidth limitation: All hosting companies throttle public bandwidth to avoid issues with torrent seedboxes. So the more you get for the same price, the better (i.e. Scaleway provides 250Mbit/s while OVH only 200Mbit/s)
- Free backup storage: At some point we will have data to back up, so check if they provide some external storage for backups
- IPv6: They should provide IPv6; not mandatory, but it is 2020
- Domain name/Free mail account: If you plan to use them as the registrar for your domain name, check if they can provide you email account storage, in order to configure it as a fallback so you never lose mail
Once you have picked your server provider, run the install and choose Debian for the OS. At some point they will ask you for your ssh key, so provide the one created earlier.
If you get the chance to choose your filesystem, use XFS instead of ext4, as it provides good support for container runtimes.
If everything is installed correctly you should be able to do a
ssh root@ip.of.the.server
The machine is in place and reachable from the outside world.
The first thing to do is to secure it! We want to:
- Enable automatic security updates
- Tighten SSH server access
- Restrict network access
Enable automatic security updates
Let's start by enabling automatic security updates on Debian.
In our Makefile
HOST=${my-server}
.PHONY: install tools
# ...
tools:
	ssh ${HOST} 'apt-get update && apt-get install -y curl htop mtr tcpdump ncdu vim dnsutils strace linux-perf iftop'
	# Enable automatic security updates
	ssh ${HOST} 'echo "unattended-upgrades unattended-upgrades/enable_auto_updates boolean true" | debconf-set-selections && apt-get install unattended-upgrades -y'
With that, the machine installs security updates on its own, without requiring us to manually type apt-get update && apt-get upgrade.
Secure the SSH server
Next is improving the security of our ssh server.
We are going to disable password authentication and allow only public key authentication.
As our ssh keys are encrypted in our repository, they will always be available to us if needed (as long as we have the GPG key).
The main config options for your sshd_config
PermitRootLogin prohibit-password
PubkeyAuthentication yes
AllowUsers erebe root
X11Forwarding no
StrictModes yes
IgnoreRhosts yes
As I do not use any configuration management (i.e. Ansible), it is kind of tedious to use a regular user and leverage privilege escalation (sudo) to do stuff as root.
So I allow root login on the SSH server to make things simpler to manage. If you plan to use a configuration management system, disable root login authentication.
Now let's use our Makefile again to automate the deployment of the config.
Warning: Make sure that you are actually able to log in with your ssh key before doing this, otherwise you will have to reinstall your machine/use the rescue console of your hosting provider to fix things.
❯ cat Makefile
.PHONY: install tools ssh
#...
# Check if the file differs from our git repository, and if it is the case re-upload it and restart the ssh server
ssh:
	ssh ${HOST} "cat /etc/ssh/sshd_config" | diff - config/sshd_config || (scp config/sshd_config ${HOST}:/etc/ssh/sshd_config && ssh ${HOST} systemctl restart ssh)
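To further reduce the risk of locking yourself out, you could validate the uploaded config before restarting the daemon; a possible extra step, not in my Makefile:

# sshd -t exits non-zero if the config is invalid, so the restart only runs on success
ssh ${HOST} 'sshd -t && systemctl restart ssh'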
If you want to go a step further, you can:
- Change the default ssh server port
- Disallow root authentication
- Enable 2-factor authentication with google-authenticator (it does not contact Google)
Secure network access
The last part of the plan is to secure the network by putting firewall rules in place.
I like to stay close to the real thing, so I use iptables directly for my firewall rules. This comes at the cost of having to duplicate the rules for IPv4 and IPv6.
If you would like to simplify the task, please use UFW - Uncomplicated Firewall.
We want our deployment of iptables rules to be idempotent, so we will create a custom chain to avoid messing with the default one.
Also, I use iptables
commands directly instead of iptables-restore,
because iptables-restore
files have to be complete and do not play well when other applications manage only a subpart of the firewall rules. As we will install Kubernetes later on, this will allow us to avoid messing with its proxy rules.
#!/bin/sh
# Run only for our main NIC
[ "$IFACE" = "enp1s0" ] || exit 0

# In order to get an IPv6 lease/route from online.net
sysctl -w net.ipv6.conf.enp1s0.accept_ra=2

###########################
# IPv4
###########################
# Reset our custom chain
iptables -P INPUT ACCEPT
iptables -D INPUT -j USER_CUSTOM
iptables -F USER_CUSTOM
iptables -X USER_CUSTOM
iptables -N USER_CUSTOM

# Allow loopback interface
iptables -A USER_CUSTOM -i lo -j ACCEPT
# Allow wireguard interface
iptables -A USER_CUSTOM -i wg0 -j ACCEPT
# Allow Kubernetes interfaces
iptables -A USER_CUSTOM -i cni0 -j ACCEPT
iptables -A USER_CUSTOM -i flannel.1 -j ACCEPT
# Allow already established connections
iptables -A USER_CUSTOM -p tcp -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A USER_CUSTOM -p udp -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A USER_CUSTOM -p icmp -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Accept incoming ICMP - the server provider uses ping to monitor the machine
iptables -A USER_CUSTOM -p icmp -j ACCEPT
# Allow ssh
iptables -A USER_CUSTOM -p tcp --dport 22 -j ACCEPT
# Allow http/https
iptables -A USER_CUSTOM -p tcp --dport 80 -j ACCEPT
iptables -A USER_CUSTOM -p tcp --dport 443 -j ACCEPT
# Allow SMTP and IMAP
iptables -A USER_CUSTOM -p tcp --dport 25 -j ACCEPT
iptables -A USER_CUSTOM -p tcp --dport 993 -j ACCEPT
# Allow wireguard
iptables -A USER_CUSTOM -p udp --dport 995 -j ACCEPT
# Allow kubernetes k3s api server
# We will disable it after setting up our VPN, to not expose it to the internet
iptables -A USER_CUSTOM -p tcp --dport 6443 -j ACCEPT

# Add our custom chain
iptables -I INPUT 1 -j USER_CUSTOM
# DROP INCOMING TRAFFIC by default if nothing matches
iptables -P INPUT DROP

#######
# IPv6
#######
# do the same thing with ip6tables instead of iptables
# Accept incoming ICMP
ip6tables -A USER_CUSTOM -p icmpv6 -j ACCEPT
# Allow IPv6 route auto-configuration if your provider supports it
ip6tables -A USER_CUSTOM -p udp --dport 546 -j ACCEPT
ip6tables -A USER_CUSTOM -p icmpv6 --icmpv6-type router-advertisement -j ACCEPT
ip6tables -A USER_CUSTOM -p icmpv6 --icmpv6-type router-solicitation -j ACCEPT
ip6tables -A USER_CUSTOM -p icmpv6 --icmpv6-type neighbour-advertisement -j ACCEPT
ip6tables -A USER_CUSTOM -p icmpv6 --icmpv6-type neighbour-solicitation -j ACCEPT
ip6tables -A USER_CUSTOM -p icmpv6 --icmpv6-type echo-request -j ACCEPT
ip6tables -A USER_CUSTOM -p icmpv6 --icmpv6-type echo-reply -j ACCEPT
I do not rate limit ssh connections, as more often than not it is me who gets hit by that limit. Nowadays most bots scanning ssh servers are smart enough to time their attempts to avoid being rate limited.
Even though we only allow public key authentication, some bots will try endlessly to connect to our SSH server, hoping that one day a breach appears. Like waves crashing tirelessly on the shore.
If you want to enable rate limiting anyway, you should add these rules
-A USER_CUSTOM -p tcp -m conntrack --ctstate NEW --dport 22 -m recent --set --name SSH
-A USER_CUSTOM -p tcp -m conntrack --ctstate NEW --dport 22 -m recent --update --seconds 60 --hitcount 4 --rttl --name SSH -j DROP
-A USER_CUSTOM -p tcp -m conntrack --ctstate NEW --dport 22 -j ACCEPT
We are going to deploy these rules in if-pre-up.d
to restore them automatically when the machine reboots. As the rules are idempotent, we force their execution when the target is invoked, to be sure they are in place.
iptables:
	scp config/iptables ${HOST}:/etc/network/if-pre-up.d/iptables-restore
	ssh ${HOST} 'chmod +x /etc/network/if-pre-up.d/iptables-restore && sh /etc/network/if-pre-up.d/iptables-restore'
Now that we have a server provisioned and a bit more secure, we want to attach a cute DNS name to it instead of just its IP address.
If you have no idea what DNS is, please refer to:
- Wikipedia
- Cloudflare weblog put up
Like for the server provider, you are free to choose whatever registrar you want here.
I personally use GANDI.net, as they provide a free mailbox with a domain name. While I run a postfix/mail server on my machine to receive and store emails, I use GANDI's SMTP server to send my emails, to avoid having to set up and maintain a tedious DKIM and to be trusted/not end up as spam. More on that later in the mail server setup.
If you have no idea which one to pick, here are the points I look at:
- Provides an API to manage DNS records
- Propagation should be fast enough (if you plan to use Let's Encrypt DNS challenges for wildcard certificates)
- Provides DNSSEC (I don't use it personally)
Besides that, all registrars are mostly the same. I recommend:
- Your hosting company as your registrar, in order to centralize things
- Cloudflare if you plan to set up a blog later on
This one is easy; you just need to figure out how to use the API of your registrar.
For Gandi, we can use their cli, gandi,
and manage our zone in a simple text file.
In our Makefile, it gives something like
dns:
	sops -d --output secrets_decrypted/gandi.yml secrets/gandi.yml
	GANDI_CONFIG='secrets_decrypted/gandi.yml' gandi dns update erebe.eu -f dns/zones.txt
with our zones.txt
file looking like
@ 10800 IN SOA ns1.gandi.net. hostmaster.gandi.net. 1579092697 10800 3600 604800 10800
@ 10800 IN A 195.154.119.61
@ 10800 IN AAAA 2001:bc8:3d8f::cafe
@ 10800 IN MX 1 mail.erebe.eu.
@ 10800 IN MX 10 spool.mail.gandi.net.
@ 10800 IN MX 50 fb.mail.gandi.net.
api 10800 IN A 195.154.119.61
api 10800 IN AAAA 2001:bc8:3d8f::cafe
...
Depending on your registrar, your ISP, and the TTL you set on your records, it can take quite a while for a new record to be propagated/updated everywhere, so be patient!
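To check propagation without waiting on your resolver's cache, you can query the authoritative name server directly and compare; a quick sketch with my zone as the example (dig comes with the dnsutils package we installed earlier):

# What the authoritative server says
dig +short A erebe.eu @ns1.gandi.net
# What your local resolver currently sees
dig +short A erebe.eu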
We now have a secured server, with a domain name attached, that we can re-install easily.
The next step is to install Kubernetes on it. The choice of Kubernetes may be a bit controversial, since we are using it on only a single machine. Kubernetes is a container orchestrator, so you only leverage its full power when managing a fleet of servers.
In addition, running vanilla Kubernetes requires installing etcd and various other heavyweight components, plus some effort configuring each module so that they work correctly together.
Fortunately for us, an alternative to this heavy/production vanilla install exists.
Meet K3S, a trimmed and packaged Kubernetes cluster in a single binary. This prodigy is brought to us by Rancher Labs, one of the main big players in the container operator world. They made the choices for you (replacing etcd with SQLite, network overlay, load balancer, ...) in order for k3s to be as small as possible and straightforward to set up. Yet it is a 100% compliant Kubernetes cluster.
The main benefit of having Kubernetes installed on my server is that it gives me a standard interface for all my deployments, keeps everything stored in git, and lets me leverage tools like skaffold when I am developing my projects. My server is also my playground, so it is nice to stay in touch with the fancy stuff of the moment.
Warning: With everything installed, just having the Kubernetes server components running adds a steady 5-10% CPU usage on my Intel(R) Atom(TM) CPU C2338 @ 1.74GHz (2 cores).
So if you are already CPU bound, do not use it, or scale up your server.
Let's start. To install K3s, nothing more complicated than
kubernetes_install:
	ssh ${HOST} 'export INSTALL_K3S_EXEC=" --no-deploy servicelb --no-deploy traefik --no-deploy local-storage"; curl -sfL https://get.k3s.io | sh -'
We are disabling some extra components, as we do not need them. Specifically:
- servicelb:
Everything will live on the same machine, so there is no need to load balance; more often than not we will also avoid the network overlay by using the host network directly as much as possible
- traefik:
I have more experience with Nginx/HAProxy for reverse proxying, so I will use the nginx ingress controller rather than Traefik. Feel free to use it if you want
- local-storage:
this utility automatically creates local volumes (PV) on your hosts; as we have only one machine, we can skip this complexity and just use HostPath
volumes (see the sketch after this list)
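As an illustration of that last point, a hostPath volume is declared directly in a pod/deployment spec; a minimal sketch with an example path:

volumes:
- name: data
  hostPath:
    path: /opt/my-app/data   # any directory on the host machine
    type: Directory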
After running the install command above, you can ssh into your server and do a
sudo kubectl get nodes
# logs are available with
# sudo journalctl -feu k3s
and check that your server is in Ready state (it can take a while). If it is the case, congrats! You have a Kubernetes control plane running!
Now that this is done, we need to automate the setup of the kubeconfig.
From your server, copy the content of the kube config file /etc/rancher/k3s/k3s.yaml
and encrypt it with sops under secrets/kubernetes-config.yml.
Be sure to replace 127.0.0.1 in the config with the IP/domain name of your server.
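A possible way to script that step, assuming the file layout used in this repository and my domain as a placeholder:

# Fetch the kubeconfig, point it at the server, and encrypt it into the repo
scp ${HOST}:/etc/rancher/k3s/k3s.yaml secrets_decrypted/kubernetes-config.yml
sed -i 's/127.0.0.1/erebe.eu/' secrets_decrypted/kubernetes-config.yml
sops -e secrets_decrypted/kubernetes-config.yml > secrets/kubernetes-config.yml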
After that, add to the install section of your Makefile
install:
	...
	mkdir ~/.kube || exit 0
	sops -d --output ~/.kube/config secrets/kubernetes-config.yml
If you did things correctly, and you have kubectl installed on your local machine, you should be able to do a
kubectl get nodes
and see your server ready!
I have many small pet projects exposing an HTTP endpoint that I want to expose to the rest of the web. As I have blocked all incoming traffic other than ports 80 and 443, I need to multiplex every application behind those two ports. For that I need to install a reverse proxy that can also do TLS termination.
As I have disabled Traefik, the default reverse proxy, during the k3s install, I need to install my own. My choice went to Nginx. I know it well along with HAProxy, I know it is reliable, and it is the lighter of the two on Kubernetes.
To install it on your K3s cluster, either use the Helm chart or apply it directly with a kubectl apply. Refer to the install guide for bare metal.
WARNING: Don't copy-paste nginx-ingress annotations directly from the documentation; the '-' is not a real '-' and your annotations will not be recognized.
To avoid having to also manage Helm deployments, I install it directly from the YAML files available at
https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.40.2/deploy/static/provider/baremetal/deploy.yaml
I am just modifying the deployment so that the Nginx reverse proxy uses HostNetwork
and avoids going through the network overlay.
In the above YAML file, change the DNS policy value to ClusterFirstWithHostNet
and add a new entry hostNetwork: true
so that the container uses your network card directly instead of a virtual interface.
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
...
spec:
  dnsPolicy: ClusterFirstWithHostNet
  hostNetwork: true
  containers:
  ...
If you are using the Helm chart, there is a variable/flag to toggle the use of the host network.
Add your YAML file to your repository and update your Makefile to deploy it
k8s:
	# If you use helm
	#helm3 repo add stable https://kubernetes-charts.storage.googleapis.com/
	#helm3 repo update
	#helm3 repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
	#helm3 install ingress-nginx ingress-nginx/ingress-nginx --set controller.hostNetwork=true
	kubectl apply -f k8s/ingress-nginx-v0.40.2.yml
	kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=120s
For more information regarding Nginx as an ingress, please refer to the documentation.
If you set up everything correctly, you should see something like
❯ kubectl get pods -o wide -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE   IP               NODE           NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-gzpvj        0/1     Completed   0          9d    10.42.0.106      erebe-server   <none>           <none>
ingress-nginx-admission-patch-hs457         0/1     Completed   0          9d    10.42.0.107      erebe-server   <none>           <none>
ingress-nginx-controller-5f89b4b887-5wxmd   1/1     Running     0          8d    195.154.119.61   erebe-server   <none>           <none>
with the IP of your ingress-nginx-controller being the IP of your main interface
erebe@erebe-server:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
   ...
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
   link/ether 00:08:a2:0c:63:4e brd ff:ff:ff:ff:ff:ff
   inet 195.154.119.61/24 brd 195.154.119.255 scope global enp1s0
      valid_lft forever preferred_lft forever
erebe@erebe-server:~$ sudo ss -lntp | grep -E ':(80|443) '
LISTEN 0 128 0.0.0.0:80  0.0.0.0:* users:(("nginx",pid=32448,fd=19),("nginx",pid=22350,fd=19))
LISTEN 0 128 0.0.0.0:80  0.0.0.0:* users:(("nginx",pid=32448,fd=11),("nginx",pid=22349,fd=11))
LISTEN 0 128 0.0.0.0:443 0.0.0.0:* users:(("nginx",pid=32448,fd=21),("nginx",pid=22350,fd=21))
LISTEN 0 128 0.0.0.0:443 0.0.0.0:* users:(("nginx",pid=32448,fd=13),("nginx",pid=22349,fd=13))
LISTEN 0 128 [::]:80     [::]:*    users:(("nginx",pid=32448,fd=12),("nginx",pid=22349,fd=12))
LISTEN 0 128 [::]:80     [::]:*    users:(("nginx",pid=32448,fd=20),("nginx",pid=22350,fd=20))
LISTEN 0 128 [::]:443    [::]:*    users:(("nginx",pid=32448,fd=14),("nginx",pid=22349,fd=14))
LISTEN 0 128 [::]:443    [::]:*    users:(("nginx",pid=32448,fd=22),("nginx",pid=22350,fd=22))
To verify that everything is working, you can deploy the resources below and check that you can access http://domain.name
with the file listing of the container displayed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: webserver
        image: python:3.9
        imagePullPolicy: IfNotPresent
        command: ["python"]
        args: ["-m", "http.server", "8083"]
        ports:
        - name: http
          containerPort: 8083
---
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  selector:
    app: test
  ports:
  - protocol: TCP
    port: 8083
    name: http
  clusterIP: None
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: test
          servicePort: http
This deployment will start a simple Python HTTP server on port 8083 on the host network; a service will reference this deployment, and the ingress (the configuration for our reverse proxy) will be configured to point to it on path /.
To debug, you can check
# To inspect the python simple http server pod state
kubectl describe pod test
# To see the endpoints listed by the service
kubectl describe service test
# To see the ingress state
kubectl describe ingress test-ingress
# To check the generated config of nginx
kubectl exec -ti -n ingress-nginx ingress-nginx-controller-5f89b4b887-5wxmd -- cat /etc/nginx/nginx.conf
We have our reverse proxy running; now we want our k3s cluster to be able to generate TLS certificates on the fly for our deployments. For that we will use the popular CertManager with Let's Encrypt as a backend/issuer.
To install it, simply do a
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.yaml
To automate the deployment, we add it to the repository and add a couple of lines to our Makefile
k8s:
	...
	kubectl apply -f k8s/cert-manager-v1.0.4.yml
You can verify the install with
$ kubectl get pods --namespace cert-manager
NAME                                          READY   STATUS    RESTARTS   AGE
cert-manager-5c6866597-zw7kh                  1/1     Running   0          2m
cert-manager-cainjector-577f6d9fd7-tr77l      1/1     Running   0          2m
cert-manager-webhook-787858fcdb-nlzsq         1/1     Running   0          2m
Once Cert-Manager is deployed, we need to configure an Issuer that is going to generate valid TLS certificates. For that we will use the free Let's Encrypt!
To do so, simply deploy a new resource on the cluster
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: email@your_domain.name
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
This tells cert-manager that we want to use the ACME HTTP challenge of Let's Encrypt, and to use nginx as the ingress for it.
The issuer is configured for the whole cluster (with kind: ClusterIssuer),
so it will:
- Watch all namespaces for the annotation
cert-manager.io/cluster-issuer: "letsencrypt-prod"
- Request a challenge from Let's Encrypt to (re-)generate TLS certificates
- Create a secret with the new certificates upon challenge success
In our Makefile
k8s:
	...
	kubectl apply -f k8s/lets-encrypt-issuer.yml
Warning: Make sure your DNS name is valid/pointing to the right machine before doing this, as it is easy to get blacklisted/throttled by Let's Encrypt. Especially if you are using the DNS challenge to get wildcard certificates.
If you configured everything correctly, modifying our previous ingress to add
a cluster issuer annotation, a TLS section in the spec, and a host in the rules is enough.
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod" # Here
spec:
  tls: # here
  - hosts:
    - domain.name
    secretName: test-tls
  rules:
  - host: domain.name # here
    http:
      paths:
      - path: /
        backend:
          serviceName: test
          servicePort: 8083
After applying the new version of the ingress, cert-manager should detect the annotation and start a challenge (you can watch the pods for the ACME challenge being spawned), and after a short while your TLS certificate gets deployed as a secret.
$ kubectl get secrets test-tls
If everything is okay, simply open your browser and visit https://domain.name
to see our simple Python backend served with a valid TLS certificate!!!
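If you prefer the command line to a browser, a quick way to inspect the served certificate (replace domain.name with yours):

openssl s_client -connect domain.name:443 -servername domain.name </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -dates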
Now I am going to install a mail server on my machine, with a few caveats.
I am not going to use this SMTP server as an outgoing mail server, because nowadays that supposes setting up and maintaining DKIM, SPF and DMARC, and even when I had done all of that, my emails sometimes still ended up in spam.
The price is not worth it, so I am using my registrar gandi.net's SMTP server as a relay to send my emails.
I am not going to go into the details of how to configure postfix + dovecot + fetchmail + spamassassin, as there are already plenty of guides for that on the internet. My goal is to show how I use Kubernetes to make them all work together.
For more information, you can refer to my repository to look into the details. The high-level overview is:
- Cert-Manager issues valid TLS certificates that are used by dovecot and postfix
- Postfix is configured with virtual aliases to accept emails for *@my_domain.name
- Postfix does not use any database (so no MySQL)
- All mail is redirected to a single user, which runs procmail with a custom program, hmailfilter, to automatically triage my email (warning: procmail has been unmaintained for a few years and contains CVEs)
- Emails are stored in the maildir format
- Dovecot and postfix communicate by sharing this single maildir, mounting the same hostPath volume in both containers
- I do not tag my custom container images; GitHub Actions is configured to rebuild the {postfix, dovecot} images on every push and to publish them under latest
- I use trunk-based deployment for my images: I simply delete the running pod and let it recreate itself, using imagePullPolicy: Always to fetch the latest version
So let's start. First, update your MX DNS records to point to your server
@ 10800 IN MX 1 mail.erebe.eu.
@ 10800 IN MX 10 spool.mail.gandi.net.
@ 10800 IN MX 50 fb.mail.gandi.net.
In my setup I keep my registrar's SMTP servers as a safety net in case my server is down. Fetchmail is configured to retrieve from them any emails they may have received for me.
The next step is to get a valid TLS certificate for both:
- Postfix, as we want to support STARTTLS/SSL
- Dovecot: I only allow IMAPS and do not want self-signed certificate warning pop-ups
For that we simply use the Kubernetes cert-manager and create a Certificate resource
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: dovecot-tls
spec:
  # Secret names are always required.
  secretName: dovecot-tls
  duration: 2160h # 90d
  renewBefore: 720h # 30d
  subject:
    organizations:
    - erebe
  # The use of the common name field has been deprecated since 2000 and is
  # discouraged from being used.
  isCA: false
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  usages:
    - server auth
    - client auth
  # At least one of a DNS Name, URI, or IP address is required.
  dnsNames:
  - mail.your_domain.name
  # Issuer references are always required.
  issuerRef:
    name: letsencrypt-prod
    # We can reference ClusterIssuers by changing the kind here.
    # The default value is Issuer (i.e. a locally namespaced Issuer)
    kind: ClusterIssuer
With that, cert-manager will issue a certificate under the secret dovecot-tls,
signed by Let's Encrypt.
After a short while, your secret will be available
❯ kubectl describe secret dovecot-tls
Name:         dovecot-tls
Namespace:    default
Labels:       <none>
Annotations:  cert-manager.io/alt-names: mail.erebe.eu
              cert-manager.io/certificate-name: dovecot-tls
              cert-manager.io/common-name: mail.erebe.eu
              cert-manager.io/ip-sans:
              cert-manager.io/issuer-group:
              cert-manager.io/issuer-kind: ClusterIssuer
              cert-manager.io/issuer-name: letsencrypt-prod
              cert-manager.io/uri-sans:

Type:  kubernetes.io/tls

Data
====
tls.crt: 3554 bytes
tls.key: 1679 bytes
After that, we can inject those certificates into the containers thanks to volumes in our deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dovecot
  labels:
    app: dovecot
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: dovecot
  template:
    metadata:
      labels:
        app: dovecot
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: mail
        image: erebe/dovecot:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 993
        volumeMounts:
        - name: dovecot-tls
          mountPath: /etc/ssl/dovecot/
          readOnly: true
        - name: dovecot-users-password
          mountPath: /etc/dovecot/users/
          readOnly: true
        - name: mail-data
          mountPath: /data
      volumes:
      - name: dovecot-tls
        secret:
          secretName: dovecot-tls
      - name: dovecot-users-password
        secret:
          secretName: dovecot-users-password
      - name: mail-data
        hostPath:
          path: /opt/mail/data
          type: Directory
In my deployments:
- We use the host network
- All the data is stored on the host file system under
/opt/xxx
in order to back it up easily
- All the containers use the same user ID 1000 for writing data, to avoid conflicting permissions
- Passwords are stored as Kubernetes secrets and committed to the repository thanks to sops
apiVersion: v1
kind: Secret
metadata:
  name: dovecot-users-password
type: Opaque
stringData:
  users: 'erebe:{MD5-CRYPT}xxxxx.:1000:1000::::/bin/false::'
In the end, deploying dovecot from our Makefile is a simple
dovecot:
	sops -d --output secrets_decrypted/dovecot.yml secrets/dovecot.yml
	kubectl apply -f secrets_decrypted/dovecot.yml
	kubectl apply -f dovecot/dovecot.yml
For postfix it is the same, and we reuse the previously created TLS certificate to provide STARTTLS/SSL support for the SMTP server
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postfix
  labels:
    app: postfix
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: postfix
  template:
    metadata:
      labels:
        app: postfix
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: postfix
        image: erebe/postfix:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 25
        volumeMounts:
        - name: dovecot-tls
          mountPath: /etc/ssl/postfix/
          readOnly: true
        - name: mail-data
          mountPath: /data
        - name: fetchmail
          mountPath: /etc/fetchmail
      volumes:
      - name: dovecot-tls
        secret:
          secretName: dovecot-tls
      - name: mail-data
        hostPath:
          path: /opt/mail/data
          type: Directory
      - name: fetchmail
        configMap:
          name: fetchmail
          items:
          - key: fetchmailrc
            path: fetchmailrc
our Makefile
postfix:
	sops -d --output secrets_decrypted/fetchmail.yml secrets/fetchmail.yml
	kubectl apply -f secrets_decrypted/fetchmail.yml
	kubectl apply -f postfix/postfix.yml
and for the fetchmail config
defaults:
  timeout 300
  antispam -1
  batchlimit 100

set postmaster erebe

poll mail.gandi.net
  protocol POP3
  no envelope
  user "your_login" there
  with password "xxxx"
  is erebe here
  no keep
When I make a change, I want my custom images to be rebuilt and pushed automatically to a registry.
To do that I rely on a third party: GitHub Actions!
#.github/workflows/docker-dovecot.yml
name: Publish Dovecot Image
on:
  push:
    paths:
    - 'dovecot/**'
jobs:
  buildAndPush:
    name: Build And Push docker images
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Login to GitHub container repository
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.CR_PAT }}
      - name: Dovecot
        id: docker_build_dovecot
        uses: docker/build-push-action@v2
        with:
          context: dovecot
          file: dovecot/Dockerfile
          push: true
          tags: ghcr.io/erebe/dovecot:latest
      - name: Dovecot Image digest
        run: echo Dovecot ${{ steps.docker_build_dovecot.outputs.digest }}
When a file under dovecot/
is modified in a commit, the GitHub Actions CI triggers the job that rebuilds the docker image and pushes it to the GitHub container registry.
I use the GitHub container registry in order to centralize things as much as possible and avoid adding docker hub as yet another external dependency.
The part left to do is automatic deployment when a new image is built.
Ideally, I would like to avoid having to store my kubeconfig inside GitHub secrets, and instead code an app supporting web hooks in order to trigger a new deployment. But for now I am still thinking about how to do that properly, so until then I am left with manually deleting my pod to re-fetch the latest image ¯\_(ツ)_/¯
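In practice that manual step is a single command, since the deployments use imagePullPolicy: Always; for dovecot, for example:

# Delete the pod; the Deployment recreates it and pulls the latest image
kubectl delete pod -l app=dovecot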
Nextcloud lets you have a dropbox/google drive at home, and many more features if you want (caldav, todos, ...). The web UI works well and they also provide great mobile applications for iOS/Android.
With an extra module you can mount external storage (sftp, ftp, s3, ...), which makes Nextcloud a central point for managing your data.
Warning: If you only care about storing your data, buying a NAS or paying for a DropBox/OneDrive/GoogleDrive plan will be much more worth your bucks/time.
The deployment is nothing fancy; it is a standard deployment with its ingress. The only specificities are:
- We add an nginx annotation to increase the max body payload size
nginx.ingress.kubernetes.io/proxy-body-size: "10G"
- We override the default configuration of the nginx bundled inside the image with a ConfigMap, in order to make it behave well with our ingress
apiVersion: v1
kind: ConfigMap
metadata:
  name: nextcloud-nginx-siteconfig
data:
  default: |
    upstream php-handler {
      server 127.0.0.1:9000;
    }
    server {
      listen 8083;
      listen [::]:8083;
      server_name cloud.erebe.eu;
      ...
The deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud
  labels:
    app: nextcloud
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nextcloud
  template:
    metadata:
      labels:
        app: nextcloud
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: nextcloud
        image: linuxserver/nextcloud:amd64-version-20.0.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8083
        volumeMounts:
        - name: data
          mountPath: /data
        - name: config
          mountPath: /config
        - name: nginx-siteconfig
          mountPath: /config/nginx/site-confs
      volumes:
      - name: nginx-siteconfig
        configMap:
          name: nextcloud-nginx-siteconfig
      - name: data
        hostPath:
          path: /opt/nextcloud/data
          type: Directory
      - name: config
        hostPath:
          path: /opt/nextcloud/config
          type: Directory
---
apiVersion: v1
kind: Service
metadata:
  name: nextcloud
spec:
  selector:
    app: nextcloud
  ports:
  - name: http
    port: 8083
    protocol: TCP
  type: ClusterIP
  clusterIP: None
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nextcloud-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "10G"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - cloud.erebe.eu
    secretName: nextcloud-tls
  rules:
  - host: cloud.erebe.eu
    http:
      paths:
      - path: /
        backend:
          serviceName: nextcloud
          servicePort: http
My backups are simplistic: I store all the data under /opt
on the host machine, and I do not run any dedicated database.
The backup of the data consists of:
- Running a cron job every night inside Kubernetes that spawns a container
- Mounting the whole
/opt
folder inside the container as a volume
- Creating a tar of
/opt
- Pushing the tarball to the FTP server that my hosting company provides
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: backup
spec:
  schedule: "0 4 * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 2
  jobTemplate:
    spec:
      template:
        spec:
          hostNetwork: true
          dnsPolicy: ClusterFirstWithHostNet
          containers:
          - name: backup
            image: alpine
            args:
            - /bin/sh
            - -c
            - apk add --no-cache lftp;
              tar -cvf backup.tar /data;
              lftp -u ${USER},${PASSWORD} dedibackup-dc3.online.net -e 'put backup.tar -o /backups/backup_neo.tar; mv backups/backup_neo.tar backups/backup.tar; bye'
            env:
            - name: USER
              valueFrom:
                secretKeyRef:
                  name: ftp-credentials
                  key: username
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  name: ftp-credentials
                  key: password
            volumeMounts:
            - name: data
              mountPath: /data
          restartPolicy: OnFailure
          volumes:
          - name: data
            hostPath:
              path: /opt
              type: Directory
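Restoring is the mirror operation; a hedged sketch using the same FTP host and credentials (the target directory is an example):

# Fetch the last backup and unpack it into a scratch directory for inspection
lftp -u ${USER},${PASSWORD} dedibackup-dc3.online.net -e 'get /backups/backup.tar; bye'
mkdir restore && tar -xvf backup.tar -C restore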
[TODO] Monitoring with netdata
My next step is to set up a VPN with Wireguard to:
- Remove access to the kube api server from the internet
- Connect machines (Raspberry Pis) that cannot be reached from the internet
- Set up my Raspberry Pis as simple nodes in the k3s cluster
- Route my traffic through a secure network when in cafés, airports, etc. (almost never...)
We are not going to install WireGuard as a Kubernetes deployment, as it requires a kernel module to work correctly. The simplest way is to install it directly on the host machine!
Follow this guide to install and configure WireGuard for Debian.
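In short, every peer (server, laptop, phone, Raspberry Pi) needs its own key pair, generated with the wg tool:

# Generates a private key and derives its public counterpart
wg genkey | tee privatekey | wg pubkey > publickey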
The only change I made is to add PostUp
and PostDown
rules to the wg0.conf
in order to forward and masquerade traffic targeting networks outside the VPN. This setup allows me to route all of a local machine's traffic through the VPN (i.e. when using my phone) when I want to.
#wg0.conf
[Interface]
Address = 10.200.200.1/24
ListenPort = 995
PrivateKey = __SERVER_PRIVATE_KEY__
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o enp1s0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o enp1s0 -j MASQUERADE

[Peer]
PublicKey = __RASPBERRY_PUBLIC_KEY__
AllowedIPs = 10.200.200.2/32

[Peer]
PublicKey = __PHONE_PUBLIC_KEY__
AllowedIPs = 10.200.200.3/32

[Peer]
PublicKey = __LAPTOP_PUBLIC_KEY__
AllowedIPs = 10.200.200.4/32
On my phone, for example, to route all the traffic through the VPN, I have a setup like this one
[Interface] # Client
PrivateKey = xxx
Address = 10.200.200.3/32

[Peer] # Server
PublicKey = xxxx
## Allow all the traffic to flow through the VPN
AllowedIPs = 0.0.0.0/0
The Makefile target to automate the deployment of the config
wireguard:
	sops exec-env secrets/wireguard.yml 'cp wireguard/wg0.conf secrets_decrypted/; for i in $$(env | grep _KEY | cut -d = -f 1); do sed -i "s#__$${i}__#$${!i}#g" secrets_decrypted/wg0.conf ; done'
	ssh ${HOST} "cat /etc/wireguard/wg0.conf" | diff - secrets_decrypted/wg0.conf || (scp secrets_decrypted/wg0.conf ${HOST}:/etc/wireguard/wg0.conf && ssh ${HOST} systemctl restart wg-quick@wg0)
	ssh ${HOST} 'systemctl enable wg-quick@wg0'
From time to time it is not possible to connect to my VPN because of some firewall: either Wireguard's UDP traffic is not allowed, or port 995 (POP3S), which I bind it on, is forbidden.
To bypass those firewalls and still reach my private network, I use WsTunnel, a websocket tunneling utility that I wrote. Basically, wstunnel leverages the WebSocket protocol, which runs over HTTP, to tunnel TCP/UDP traffic through it.
With that, 99.9% of the time I can connect to my VPN network, at the cost of three layers of encapsulation (data -> WebSocket -> Wireguard -> IP) 😡
Check the readme for more information.
# On the client
wstunnel -u --udpTimeout=-1 -L 1995:127.0.0.1:995 -v ws://ws.erebe.eu
# in your wg0.conf, point the peer endpoint to 127.0.0.1:1995 instead of domain.name:995
On the server, the only specificities are on the ingress.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wstunnel
  labels:
    app: wstunnel
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: wstunnel
  template:
    metadata:
      labels:
        app: wstunnel
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: wstunnel
        image: erebe/wstunnel:latest
        imagePullPolicy: Always
        args:
        - "--server"
        - "ws://0.0.0.0:8084"
        - "-r"
        - "127.0.0.1:995"
        ports:
        - containerPort: 8084
---
apiVersion: v1
kind: Service
metadata:
  name: wstunnel
spec:
  selector:
    app: wstunnel
  ports:
  - protocol: TCP
    port: 8084
    name: http
  clusterIP: None
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wstunnel-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/connection-proxy-header: "upgrade"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - ws.erebe.eu
    secretName: wstunnel-tls
  rules:
  - host: ws.erebe.eu
    http:
      paths:
      - path: /
        backend:
          serviceName: wstunnel
          servicePort: http
I want my Raspberry Pi, which lives inside my home network and is not reachable from the internet, to be manageable like a simple node in the Kubernetes cluster. For that, I am going to set up Wireguard on the Raspberry Pi and install the k3s agent on it.
- Install Raspbian on your Raspberry Pi - Tutorial
- Set up Wireguard on the Raspberry Pi - Tutorial
- Configure Wireguard
[Interface]
PrivateKey = xxx
## Client ip address ##
Address = 10.200.200.2/32

[Peer]
PublicKey = xxxx
AllowedIPs = 10.200.200.0/24
## Your Debian 10 LTS server's public IPv4/IPv6 address and port ##
Endpoint = domain.name:995
## Keep the connection alive ##
PersistentKeepalive = 25
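Once the VPN is up, joining the Pi to the cluster is the standard k3s agent install pointed at the server's VPN address; a sketch assuming 10.200.200.1 for the server (the node token lives at /var/lib/rancher/k3s/server/node-token on the server):

# On the Raspberry Pi, join the cluster through the VPN
curl -sfL https://get.k3s.io | K3S_URL=https://10.200.200.1:6443 K3S_TOKEN=<node-token> sh -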