Ever since I started making websites, I have always wanted to think up a great way for friends and family to manage their own blog site without a janky CMS. A system that doesn’t require touching a command line at all and doesn’t have a ton of convoluted steps that have to be followed in a specific order every single time.
I still don’t have a perfect solution, but the setup of this site is getting pretty good. Since writing my post about this space a few days ago, I’ve added a few more abstractions to my workflow to help myself focus as much on the writing as possible. Reducing complexity seems to help a lot of people write more.
Right now, the site runs on a Hugo-generated public folder. Locally, I have a Makefile and two executables. The first executable, site-dev, runs the make targets to clean up and build the site, and lets me quickly test it out locally if I want. If it looks good, I open the second executable, deploy-site. This kills the first one and syncs everything over to the server. For normal changes, everything is live within a second and I can go on with my day.
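I haven't shown site-dev here; a minimal sketch of what it could look like, where the exact make targets and Hugo flags are my assumptions:
# site-dev (sketch)
cd "$projectDir" # same project variable the deploy script uses
make clean build-site # rebuild the public folder from scratch
hugo server -D # local preview server, drafts included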
# deploy-site
killall hugo # also kill tailwindcss-arm64 here if using the tailwind CLI
cd "$projectDir"
make deploy-site # make build-site, then rsync public to the server
git add .
read -p "Commit message (Enter for default): " commit_message
if [ -z "$commit_message" ]; then
    default_message="Default message: deploy-site script was run. Committing and pushing the uploaded changes."
    git commit -m "$default_message"
else
    git commit -m "$commit_message"
fi
git push
The site runs on a VPS off of this generated folder, and Caddy serves the new content as soon as it lands on the server. The Caddy configuration stays simple even when adding a backend or multiple other sites, all with automatically configured TLS.
# caddyfile
jplee.me {
    handle {
        root * /var/www/public
        file_server
    }
}
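For example, extending it with a backend route and a second site could look roughly like this; the /api path, port 8080, and the extra domain are made up for illustration:
# caddyfile (hypothetical extension)
jplee.me {
    handle /api/* {
        reverse_proxy localhost:8080 # some backend service
    }
    handle {
        root * /var/www/public
        file_server
    }
}

another-site.example {
    root * /var/www/another
    file_server # TLS is still provisioned automatically
}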
As for actually writing the posts, I have an Obsidian vault of my Hugo /content folder. I write all of the posts in ultra-basic Markdown with four front-matter fields that I just copy from the last post, changing the date, title, and tags. Everything else on the blog post page is rendered automatically.
- Open Obsidian.
- Create a new blog post file.
- Copy the front-matter from the last post and change the title and date. If tags are desired, change or add those too (see the example after this list).
- Write the post content in Markdown.
- Double-click to open deploy-site.
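The front-matter itself is only a few lines. Something like this, where draft is my guess at the fourth field since the post only names title, date, and tags:
---
title: "A New Post"
date: 2024-01-15
tags: ["example"]
draft: false
---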
So if you have access to your friend or family member’s computer, this could be a decent option for an awesome site. They’d need to learn Markdown and remember where you put the deploy-site executable, and then they’re good to go. Unless they end up having some kind of infra issue… To cover common infra issues, you could make another executable that does basic restarts; I haven’t run into anything yet that needs such a solution. Or you could point these executables at a static hosting service instead.
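Such a restart executable could be as small as this sketch, assuming the server runs Caddy under systemd and that a $server variable is configured like $projectDir:
# fix-site (sketch)
ssh "$server" "sudo systemctl restart caddy && systemctl is-active caddy"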
This setup actually doesn’t require git at all. I do have a git step in the deploy script which gets called when deploying, but my server doesn’t pull changes from git. I decided to go with an IPv6-only host to save $1, ran into GitHub issues with cloning (GitHub doesn’t serve IPv6-only clients), and didn’t feel like over-engineering my solution. So I went with rsync, and deployments go through extremely quickly and efficiently for a lightweight blog site like this.
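The rsync step amounts to something like the following; the flags and paths are assumptions rather than the exact Makefile contents:
# inside the deploy-site make target (sketch)
# -a keeps permissions and times, -z compresses, --delete prunes removed files
rsync -az --delete public/ "$server":/var/www/public/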
In the future, if GitHub fully resolves #10539, I could set up a webhook so the server pulls after each push. But honestly, I don’t think it’s worth it at all! I have version control for any historical changes if needed, and deployments are already blazing fast without any accounts or extra tokens on the server.
Photos
After deciding to make a photos page, I wanted to make sure I did a good job. I added steps to resize the jpeg images and then optimize them into webp format. This is one downside of doing everything yourself: there aren’t a lot of automatic optimizations. To get new photos onto the photo gallery page, I just put them within /img/photos/, and that’s it. In the build step, the photos folder is checked for jpegs, which are converted with ffmpeg before the static content is deployed.
# jpeg-to-webp
# dir with .jpegs
input_folder="$toOptimize"
# loop over the jpegs
for jpeg_file in "$input_folder"/*.jpeg; do
    if [ -f "$jpeg_file" ]; then
        filename=$(basename "$jpeg_file" .jpeg)
        # ffmpeg convert to webp; delete the original only if conversion succeeds
        ffmpeg -i "$jpeg_file" -c:v libwebp "$input_folder/$filename.webp" && rm "$jpeg_file"
    fi
done
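The defaults have worked fine for me, but ffmpeg’s libwebp encoder also accepts a quality setting (0-100, default 75) if you want to trade file size against fidelity:
# e.g. slightly higher quality than the default
ffmpeg -i "$jpeg_file" -c:v libwebp -quality 80 "$input_folder/$filename.webp"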
Another thing I want to add is an auto-thumbnail generator so I don’t have to do that manually either. So far it is naive, but it will get more complex soon, since I also want to preserve the important part of the image, which might not be centered. Since I keep the thumbnails square, this is an extra script that crops each photo to a square bounded by the shorter of its width or height.
# thumbnail.py
def calculate_roi(img):
    height, width, _ = img.shape
    # check if it is portrait/landscape
    if width > height:
        roi_size = height
        x = (width - roi_size) // 2
        y = 0
    else:
        roi_size = width
        x = 0
        y = (height - roi_size) // 2
    roi = (x, y, roi_size, roi_size)
    return roi
## [...] ##
image_files = [f for f in os.listdir(dir) if f.endswith(".jpeg")]
for img in image_files:
    input_image = cv2.imread(os.path.join(dir, img))
    ## [...] ##
    roi = calculate_roi(input_image)
    ## [...] ##
    output_file = os.path.join(dir, os.path.splitext(img)[0] + "-thumb.jpeg")
    cv2.imwrite(output_file, resized_image)
When deploying or building the site, /img/photos is checked for any jpeg files. If there are any, a thumbnail is created for each with calculate_roi. Then ffmpeg converts all jpeg-formatted images to webp. The photos page loads all photos within this folder and uses [image-name]-thumb.webp as a thumbnail for faster initial loading.
Additionally, using the gallery shortcode, a gallery identical to the one on the photos page can be generated anywhere, pointed at a different folder of images. Another improvement would be distributing these images via a CDN. I don’t have one configured quite yet, but once I do, this jpeg-to-webp optimization will be extra useful.
This versus a CMS
Management systems aren’t bad in general, but I like to avoid them for many reasons, such as the bloat that frequently comes along. With this setup, you own everything locally, can write posts offline or on your phone, and ship purely minimal HTML files. This lets you maximize performance and ownership and minimize bloat as much as you want.
Cons
To theme the site themselves, the creator needs to read through the Hugo docs and understand which file does what, and how to use CSS to style each template. Although this is not overly difficult, most people just want a website without thinking about any of this.
If the site is being shipped from scratch, the creator also has to buy a VPS or set up a static hosting service, then buy a domain and configure the A and CNAME records and nameservers to work with their setup. This is definitely tedious and uninspiring for someone who wants a site but doesn’t care about what’s going on behind the scenes.
Pros
I think with the help of ChatGPT, Claude, etc., a lot of the friction of setting up and managing a personal site is reduced. People who don’t know or care about code no longer have to search the internet with precisely worded queries to find exactly what they’re looking for in some forum. Within a few tries, an LLM can often generate exactly what’s needed.
Depending on what the owner is aiming for with their personal site, I think a simple setup like this can beat a CMS. Dependencies are reduced and everything comes down to “write and click”: write your post and open the deployment executable.
It can get as complicated as the owner wants. On the simple side, a 2KB file of global CSS and two HTML template files can run the whole site. Going more complex, a curious owner can experiment with adding interactivity, backend services, and monitoring to the site. Regardless, the owner has a lot of control with this type of configuration. Files are local and systems are loosely coupled.
