Home Networking

The Domain Name System (DNS) has been, and still is, mostly a mystery to me. What the heck are A records and CNAMEs? I know it doesn't work well for my homelab servers, but I do appreciate the convenience of remembering and referencing a computer name versus a four-octet number (IP address).

On a network where computer names don't resolve reliably, fixed IP addresses are a requirement. An easy but unmanageable solution is to put server names and IP addresses in a hosts file on each computer and to hard-code static IP addresses on each machine. Using permanent IP reservations on the DHCP (Dynamic Host Configuration Protocol) server is a big improvement, but name resolution is still spotty. Throw a Microsoft computer onto your network and it brings a separate name resolution process, WINS (Windows Internet Name Service). WINS can help for computers that talk "Microsoft," whether they run Windows or are Linux computers with Samba (the SMB, or Server Message Block, networking protocol) installed.
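For reference, the hosts-file approach is just one entry per server on every machine (the names and addresses below are made-up examples, not my actual network):

```
# /etc/hosts — one line per server; example names and addresses only
192.168.1.10   plexserver
192.168.1.11   pihole
192.168.1.12   mqtt
```

The pain is that every computer keeps its own copy, so any change must be repeated everywhere.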

For convenience, I use my AT&T modem/router's DHCP server to dole out IP addresses. It provides permanent IP address reservations, which are essential for servers. When I embarked on the hypervisor/virtual machine endeavor, I saw the benefit of installing one service per server. Therefore, MQTT, Plex, Pihole and other services each have their own server. I learned this lesson the hard way: as I busily doled out servers, each with its own IP address reservation, I quickly hit my DHCP server's limit of 16 reservations.

I dislike the inconvenience of managing static IP addresses. However, fixed IP addresses for servers are really important. I'm hoping Pihole can help with this.

I need to update my DNS entry for lynnhargrove.com at NameCheap.

DNS for Internet Access to my HomeLab

I'm ready to share some of my homelab services outside my home network. Personal photos and home videos are near the top of the list. With this in mind, I got a domain name from NameCheap – lynnhargrove.com. The Domain Name System (DNS) will convert my domain name to my IP address. I used this video to point lynnhargrove.com to my router. I don't want to get caught with my shorts down, so I used the port scanner from Gibson Research to see what ports are open into my network. My router did not respond to any of its probes. That's good.

As a test,

Since I get a dynamic IP address from my internet provider, it can change periodically. That means I need to check occasionally and update my IP address when it changes. Dynamic DNS (DDNS) is the service that does this, and ddclient is the program that makes it happen. I found this setup and installed ddclient on a server that is always on. It uses sendmail for success and failure notifications, so I installed sendmail.
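For NameCheap, a minimal /etc/ddclient.conf looks something like the sketch below. The password is the Dynamic DNS password from the NameCheap dashboard (not the account password), and the `@` line updates the bare domain:

```
# /etc/ddclient.conf — sketch for NameCheap dynamic DNS
protocol=namecheap
use=web, web=dynamicdns.park-your-domain.com/getip
server=dynamicdns.park-your-domain.com
login=lynnhargrove.com
password='your-ddns-password'
@
```

Running ddclient as a daemon then checks the public IP periodically and pushes changes to NameCheap.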

ProxMox Upgrade

I have experience with KVM/QEMU on an Ubuntu server. ProxMox puts a solid frontend on KVM/QEMU and is much easier to administer. After installing ProxMox for HomeAssist, I couldn't stop adding useful VMs until I ran out of space. I shouldn't have been surprised that I would run out of space on both systems. Then I decided to install ProxMox on my Plex server and make Plex a guest. I will use the second ProxMox server (ProxMox2) when I travel in my RV. Both systems are Lenovo M73 Tiny computers that are easy to port and don't require much power. I will migrate ProxMox from a 250GB SSD to a new 500GB SSD and ProxMox2 from a 120GB SSD to the 250GB SSD.

As usual, I searched for a good reference for the migration. I found a good one at OSTechNix and another at Craft Computing. I plugged a 500GB USB drive into ProxMox2 and mounted the drive using the node shell. My strategy is to back up all guests on the original ProxMox cluster to an external drive and restore them to the new ProxMox cluster. It turned out to be surprisingly easy.
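The backup-and-restore round trip can be sketched with ProxMox's own CLI tools (the VM ID 100, the /mnt/usb mount point, and the local-lvm storage name below are assumptions; the web UI does the same thing):

```shell
# On the old ProxMox host: dump a guest to the mounted USB drive
vzdump 100 --dumpdir /mnt/usb --mode stop --compress zstd

# On the new ProxMox host (with the drive re-mounted there): restore it
qmrestore /mnt/usb/vzdump-qemu-100-*.vma.zst 100 --storage local-lvm
```

Repeat per guest; the restored VM keeps its configuration, so it boots on the new host with the same settings.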

Heimdall Dashboard

Update 2

I used the Heimdall install procedure, which installs Docker first and then Heimdall.

Install Redux

I used the Pi My Life Up procedure to install the docker version of Heimdall. It required installation of docker on my Ubuntu 22.04 server.
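The Docker version boils down to a single docker run. This is a sketch using the linuxserver.io image; the ports, config path, user/group IDs and timezone here are assumptions to adjust:

```shell
# Run Heimdall from the linuxserver.io image (paths/ports are examples)
docker run -d \
  --name heimdall \
  -e PUID=1000 -e PGID=1000 -e TZ=America/Chicago \
  -p 80:80 -p 443:443 \
  -v /opt/heimdall/config:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/heimdall
```

The `--restart unless-stopped` flag is what keeps it coming back after a reboot, which was the weakness of my original install.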

Original Install

I did the Docker install, which worked until I rebooted. I started over using the Variable How-Tos tutorial. This procedure installed PHP 7.4.3, but PHP 7.4.32 was required. I found the update process in the DigitalOcean procedure. It added the ondrej/php Personal Package Archive (ppa:ondrej/php) so I could include it as an additional software source. Then I was able to update/upgrade to PHP 7.4.33. I also had to install php7.4-cli to use the PHP command line.
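The PPA route boils down to a few commands on Ubuntu (a sketch, assuming add-apt-repository is available):

```shell
# Add the ondrej/php PPA, refresh package lists, upgrade PHP,
# and add the command-line package
sudo add-apt-repository ppa:ondrej/php
sudo apt update
sudo apt upgrade
sudo apt install php7.4-cli
```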

I edited /etc/systemd/system/heimdall.service, adding the suggested configuration. I changed the user, group and working directory to mine. I changed the --port to 8443 and added --host 0.0.0.0. The last catch was to not use https in the URL: http://<ip addr>:8443. I think my previous Heimdall bookmark failed because I used https. Hopefully, a lesson learned. This is my work in progress:
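Roughly, the unit file looks like the sketch below; the user, group, working directory and php path here are placeholders, not my actual values:

```
# /etc/systemd/system/heimdall.service — sketch; adjust user/paths to yours
[Unit]
Description=Heimdall dashboard
After=network.target

[Service]
User=youruser
Group=youruser
WorkingDirectory=/opt/heimdall
ExecStart=/usr/bin/php artisan serve --host 0.0.0.0 --port 8443
Restart=on-failure

[Install]
WantedBy=multi-user.target
```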

Convert Physical Windows 10 to Virtual Machine

My ProxMox computer is a Lenovo M73 Tiny and is licensed for Windows 10 Pro. I want to use this Windows license for a VM on the same hardware. I upgraded the CPU but I don’t think that’s an issue with Windows licensing. I did try to install a new copy of Windows as a VM without the product key. I hoped this would activate as a reinstallation on the same hardware but it didn’t work. Next up is to try cloning the physical drive and importing it to ProxMox.
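If the clone route works out, importing the raw disk image into ProxMox should look roughly like this (the VM ID 101, image path, and local-lvm storage name are assumptions):

```shell
# Import a cloned physical disk image as a ProxMox VM disk, then attach it
qm importdisk 101 /mnt/usb/windows10.img local-lvm
qm set 101 --sata0 local-lvm:vm-101-disk-0
```

Attaching the imported disk as SATA tends to be the safer first boot for a physical Windows install; VirtIO drivers can come later.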

Login to Linux Server with SSH Key File

This assumes that PuTTY is used for SSH access to the servers.

The first step is to generate a public/private key pair. PuTTY Key Generator (PuTTYgen) will do this. I used all defaults, clicked "Generate", then saved the public key and the private key. The public key is concatenated to ~/.ssh/authorized_keys on the server. If it's the first key, you'll need to create this hidden directory and the authorized_keys file. The access permissions (chmod) are 700 for the .ssh directory and 600 for the authorized_keys file. It's odd that the saved public key file is not in the right format for concatenation to the authorized_keys file. Instead, you must copy and paste the key from the PuTTY Key Generator window, where it's identified as the public key for pasting into an OpenSSH authorized_keys file. This is most easily done in an SSH session to the server.

Next, open the PuTTY entry for the server and add the user name under Connection/Data. Add the private key file under Connection/SSH/Auth. It's best to avoid moving the private key file around. Save the session.

The last step is to accept the ssh-rsa public key algorithm on the server. Do this by editing /etc/ssh/sshd_config and adding "PubkeyAcceptedAlgorithms +ssh-rsa" at the bottom of the file. Then restart the ssh service.

Briefly

mkdir .ssh
chmod 700 .ssh
nano .ssh/authorized_keys
<paste public key>
chmod 600 .ssh/authorized_keys
sudo nano /etc/ssh/sshd_config
PubkeyAcceptedAlgorithms +ssh-rsa
sudo systemctl restart ssh.service
add user name and private key to PuTTY

Troubleshooting

There are many tutorials on using SSH key files to log into a Linux server without a password. None of them worked for me. Here's the problem I found and how I solved it.

The error was "Server refused our key". This led me to add "LogLevel DEBUG3" to /etc/ssh/sshd_config. Then I used the command "journalctl | grep key" or "journalctl -xe" to locate any issues when logging in with the private key. (The public key is stored on the server.) The log showed "key type ssh-rsa not in PubkeyAcceptedAlgorithms" for the server, and my key was an RSA key. I removed "LogLevel DEBUG3" from sshd_config and added "PubkeyAcceptedAlgorithms +ssh-rsa". This fixed the problem. Oddly, this setting was not needed on Raspbian. The mystery remains why none of the tutorials warned of this problem.
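The debugging loop above, in command form (a sketch using the stock Debian/Ubuntu paths):

```shell
# Turn up sshd logging, reproduce the failed login, then search the journal
echo "LogLevel DEBUG3" | sudo tee -a /etc/ssh/sshd_config
sudo systemctl restart ssh.service
# ...attempt the key-based login from PuTTY, then:
journalctl | grep -i "key"
```

Remember to remove the LogLevel line afterward; DEBUG3 is very chatty.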