Running software inside isolated containers is incredibly powerful. Companies like Google use containers to the extreme, reportedly firing up some two billion containers every week! For a home automation hub that is a little overkill; however, the benefits of containerization are equally applicable to the many microservices required in a complex home automation system.
My home automation hub is a Raspberry Pi 3 (Model B), running a number of containers:
- Home Assistant (central home automation hub, the intelligent heart of the setup)
- Mosquitto MQTT Broker (allowing smart devices to communicate on the network)
- Snips (on-device voice control)
- MongoDB (Database server)
- Leanote (Note taking application server)
- Spotify Button Backend (my own Nodejs/Express server, more on that in a future post)
- HA Tool (a Home Assistant development tool written in AngularJS; check out the demo)
This is the result of a recent restructuring of my home network architecture—an attempt to simplify the setup, making it easier to maintain, whilst also reducing power consumption and leveraging Docker.
If you wish to benefit from the advantages Docker has to offer, check out my installation guide for Raspberry Pi (includes installation of Docker Compose). {: .notice--info}
The following sections discuss the advantages of running containerized applications using Docker:

- Reducing the number of physical devices
- Making efficient use of available hardware resources
- Separation of concerns and isolation between containers
- Easy software updates
- Ease of deployment
Docker Terminology
Docker uses a few terms with specific meanings:
Dockerfile
This is the blueprint used to generate an image. Dockerfiles follow a syntax that allows you to specify a base image, install dependencies, expose ports, and set the command to run inside the container.
Image
Once a Dockerfile is executed using the `docker build` command, an image is generated. Each instruction in the Dockerfile produces a layer of the image. This allows layer caching, which comes in useful when your Dockerfile installs dependencies that do not change between builds.
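To make this concrete, here is a sketch of a Dockerfile for a small Node.js service (the file names, port and base image are illustrative assumptions, not taken from my actual setup):

```dockerfile
# Hypothetical Dockerfile for a small Node.js/Express service
FROM node:18-alpine

WORKDIR /app

# Copy and install dependencies first, so this layer is
# cached for as long as package.json does not change
COPY package*.json ./
RUN npm install

# Copy the application source; changes here do not
# invalidate the cached dependency layer above
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```

Running `docker build -t my-service .` in the same directory turns this blueprint into an image, rebuilding only the layers whose inputs changed.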
Container
A container is the ‘live’ version of an image. We can spin up containers using the `docker run <image>` command and manage them using `docker stop <container>` and `docker start <container>`.
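As a concrete example, the Mosquitto broker from the list above could be started and managed like this (the `eclipse-mosquitto` image and port 1883 are the usual defaults, but treat the exact flags as a sketch rather than a prescription):

```shell
# Start a Mosquitto container in the background,
# exposing the standard MQTT port to the host
docker run -d --name mosquitto -p 1883:1883 eclipse-mosquitto

# Stop and restart it by name
docker stop mosquitto
docker start mosquitto
```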
Reducing number of physical devices
The reason I looked into Docker and containerization in the first place was an attempt to reduce the number of physical computer devices on my network. Each device used to have a single purpose, from Raspberry Pis running an Owncloud instance to Arduino microcontrollers acting as an RF MQTT gateway. When running multiple devices, each one requires its own power supply, or leeches power via a USB port on another device. This adds clutter and wastes electricity, as power bricks/transformers get warm whether under load or not.
The other problem was the need to manage and maintain each separate device. This comes with the inconvenience of having to flash firmware to Arduinos or having to maintain the same Samba/SSH setup on multiple Raspberry Pis running the same operating system. The disadvantages are summarized below:

- Certain services must exist on each device (SSH access, file share) in order to access and maintain the device. Installing and maintaining these services adds overhead and configuration duplication.
- Maintaining each device, updating software packages and keeping it running smoothly with the rest of the system is time consuming.
- Power consumption, not just by the device but also by the power adapter generating heat. These devices are plugged in and running 24/7, so it pays off to think about reducing power consumption.
- Aesthetic reasons: space constraints, ugly wires, occupied power points, and devices that are difficult to hide.
Docker removes these complications because all services that previously required a dedicated device can be virtualized on one physical machine. This machine runs one operating system (Raspbian), one SSH server and exactly one Samba file share, reducing the overhead of duplicated admin services. Maintaining Docker containers is simple too, as I will cover later. This central Docker hub is a Raspberry Pi hidden away behind books on a shelf along with the modem.
Making efficient use of available hardware resources
Running a number of physical devices cannot make efficient use of the available hardware. Say we are running 3 devices, the processors of which run at 10% utilization and RAM is at a similarly low usage rate. From a cost efficiency perspective1, it is better to run a single hardware device at 90% utilization than multiple devices at low utilization. If the hardware has the capability to run multiple containers then it is better to make efficient use of the available hardware resources.
Separation of Concerns and Isolation between containers
The reason containerization reduces maintenance cost (or the time required to maintain the setup) is twofold:

1. Separation of concerns: Containers have a clearly defined purpose. They provide one service, whether that is a database server or a home automation hub. If we require multiple services, we simply start multiple containers.
2. Isolation: Containers can be treated as black boxes. The internal logic is not relevant to the user2.
Docker containers have clearly defined external interfaces which allow them to communicate with the “outside world”. These interfaces come in the form of file shares and TCP ports exposed to the host machine. It is this restrictive interface and isolation of the container contents which makes Docker containers so powerful. We can treat a container as a black box and all we effectively care about is the service it claims to provide3.
Easy software updates
Since each container is completely isolated from the host OS, updating the software within a container is as simple as pulling a new image. If your container is based on the image’s `:latest` tag, a `docker pull` fetches the newest version; recreating the container then puts it into service. Note that a plain restart reuses the image already on disk, so the pull step is what actually performs the upgrade.
This eliminates the problems associated with traditional software upgrades, such as version conflicts, resolving compile errors and installing new dependencies.
A Docker image is a completely self-contained installation, including all dependencies and tools required to run it. All this comes pre-packaged in an image, ready for download and use.
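The upgrade workflow can be sketched as follows, again using the Mosquitto broker as an example (container name and flags are assumptions from my own setup, not requirements):

```shell
# Fetch the newest image behind the :latest tag
docker pull eclipse-mosquitto:latest

# Recreate the container so it picks up the new image
docker stop mosquitto && docker rm mosquitto
docker run -d --name mosquitto -p 1883:1883 eclipse-mosquitto
```

With Docker Compose the same upgrade collapses to `docker-compose pull` followed by `docker-compose up -d`, which recreates only the containers whose images changed.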
Ease of deployment
Gone are the days of manually installing databases, development packages, web servers, cache stores and supporting services. That approach resulted in a single OS that took a great deal of effort to set up, with services tightly integrated into the operating system and files spread all over the directory tree. Maintaining or replicating such a setup after a hardware failure took days.
Docker makes deployment of new software as easy as running a `docker run` command, followed by the name of the image to run. This speeds up deployment. Migrating the whole Docker setup to a new machine should be simple as well: run your docker-compose files on the new machine and your containers are recreated with the correct container links and dependencies.
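A minimal docker-compose file for part of the stack above might look like this (image names vary by architecture and version, and the volume paths are illustrative assumptions, so treat this as a sketch rather than my exact configuration):

```yaml
version: "3"
services:
  homeassistant:
    image: homeassistant/home-assistant
    network_mode: host          # Home Assistant needs host networking for device discovery
    volumes:
      - ./homeassistant:/config # persist configuration outside the container
    restart: unless-stopped
    depends_on:
      - mosquitto
  mosquitto:
    image: eclipse-mosquitto
    ports:
      - "1883:1883"             # standard MQTT port
    volumes:
      - ./mosquitto:/mosquitto/config
    restart: unless-stopped
```

A single `docker-compose up -d` on a fresh machine then pulls both images and recreates the containers with their dependencies intact.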
Conclusion
We explored the various benefits of containerization in this post: how it reduces not only the time required to maintain a home automation setup but also the cost of running it, through more effective hardware utilization. What services are you running inside containers? Please share in the comments.