The Perfect Dockerfile: Turning a Hobbyist Build into a Professional Tool
Published on 2026-01-12
The simplest Dockerfile takes a minute to write: FROM node, COPY . ., CMD npm start. It works, and for local tests that is often enough. But when such an image reaches CI/CD or, God forbid, production, the problems begin: builds take forever, the image weighs gigabytes, and the security team clutches its head.
The difference between “it works” and “it works correctly” is huge. Let’s go through four levels of optimization that separate a hobbyist hack from a reliable engineering solution.
1. Foundation: Choosing the base image and determinism
Everything starts with the FROM instruction. Out of habit, many people grab full images (for example, the standard ubuntu or python:3.9) without thinking about the consequences.
Problem: Full OS images pull in hundreds of megabytes of “junk”: curl, vim, systemd. These utilities are not needed by your microservice, but they increase download time and, importantly, create a large attack surface.
What to choose?
- Alpine Linux: The king of lightweights (around 5 MB). Ideal for Go or static binaries. Important: Alpine uses the musl library instead of the standard glibc. If you write in Python or C++, this can cause compatibility or performance issues. Test it!
- Slim variants (for example, debian:bullseye-slim): The same Debian, but stripped of manuals and unnecessary packages. It keeps glibc, making it the "golden mean" for most applications.
- Distroless: High art from Google. These images don't even include a shell (sh).
  - Plus: An attacker won't be able to run any commands inside the container.
  - Minus: You also won't be able to enter it for debugging (docker exec won't work).
No :latest
Never use the latest tag in production.
- Risk: Tomorrow a new Node.js or Python version with breaking changes will be released. Your CI will automatically pull it, and production will fail.
- Solution: Pin versions. Use node:18.16.0-alpine to ensure determinism: the build must produce the same result today and in a year.
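For even stronger guarantees you can pin the image digest in addition to the tag: a tag can be silently re-pushed by the registry owner, while a digest is immutable. A sketch (the sha256 value below is a placeholder, not a real digest; get the real one with docker images --digests):

```dockerfile
# The tag gives readability; the digest guarantees the exact same bytes forever.
# The digest below is a PLACEHOLDER -- substitute the real one for your image.
FROM node:18.16.0-alpine@sha256:0000000000000000000000000000000000000000000000000000000000000000
```

With a digest pinned, docker pull fails loudly if the registry content ever changes instead of silently pulling something new.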
2. Build optimization: Caching and context
A Docker image is a layered cake. The cardinal rule of caching: if one layer changes, all subsequent layers are rebuilt from scratch.
.dockerignore is not just a whim
Analogous to .gitignore, this file prevents sending “trash” (the .git folder, node_modules, temporary logs) to the Docker daemon.
- Why: Speeds up the start of the build (less context to transfer) and protects your secrets from accidentally ending up in the image.
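A minimal .dockerignore for a Node.js project might look like this (the entries are typical examples, not a mandatory list):

```
.git
node_modules
npm-debug.log
.env
*.log
Dockerfile
```

Note the .env entry: this is exactly the kind of file that otherwise sneaks into an image via a careless COPY . . instruction.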
The order of commands decides everything
A common mistake of beginners is copying the code before installing dependencies.
❌ Bad (cache is invalidated on any code change):
COPY . .
RUN npm install # This heavy operation will run every single time!
✅ Good (smart caching):
COPY package.json package-lock.json ./
RUN npm install # Runs only if the dependencies changed
COPY . . # Copy the code. Change a comma in the code, and npm install will not re-run.
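Putting the ordering together, a cache-friendly Node.js Dockerfile might look like this (the tag, port, and entry file are illustrative):

```dockerfile
FROM node:18.16.0-alpine
WORKDIR /app

# Dependencies first: this layer is rebuilt only when the lockfile changes
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Source code last: editing it invalidates only the layers from here down
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```

Using npm ci instead of npm install here also reinforces determinism: it installs exactly what the lockfile specifies and fails if package.json and package-lock.json disagree.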
Atomic layers
Each RUN instruction creates a new layer.
Tip: Combine update, install, and cache-cleanup commands with &&. This prevents deleted files from being carried into the final image.
RUN apt-get update && apt-get install -y \
python3-dev \
&& rm -rf /var/lib/apt/lists/*
3. Security and secret management
No to god privileges
By default Docker runs processes as root. If an attacker finds a vulnerability in your application and performs a container breakout, they will get root privileges on the host machine.
Solution: Always create a user and switch to it.
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
Secrets are not for ENV
Never pass passwords or API keys via ARG or ENV. Build arguments and environment variables are permanently baked into the image metadata and layer history (docker history will show them to anyone).
Solution: Use BuildKit Secrets. It works like a temporary “flash drive” attached only during the build.
# Example of using a secret during the build (requires BuildKit)
# The index URL is a placeholder for your private package registry
RUN --mount=type=secret,id=my_token \
    PIP_EXTRA_INDEX_URL="https://__token__:$(cat /run/secrets/my_token)@pypi.example.com/simple" \
    pip install -r private-requirements.txt
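On the host side the secret is supplied at build time and never stored in any layer. A sketch, assuming the token sits in a local file token.txt (the id must match the one in the RUN --mount line):

```shell
# Requires BuildKit (the default builder in recent Docker versions)
docker build --secret id=my_token,src=./token.txt -t myapp:1.0 .
```

The secret appears inside the build only as a tmpfs-mounted file under /run/secrets/ and vanishes when that RUN step finishes.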
4. Advanced techniques: Level Up
Multi-stage Builds
This is the main best practice for compiled languages (Go, Java, Rust, C++), and for frontend builds too. The idea: the first (heavy) stage compiles the code, and the second (clean) stage copies in only the resulting binary.
- Result: The image weighs 15 MB instead of 1 GB. All source code and compilers stay out.
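A minimal multi-stage sketch for a Go service (image names and paths are illustrative, not from the article):

```dockerfile
# Stage 1: heavy builder image with the full Go toolchain
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
# Static binary so it can run on a minimal, libc-free base
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: tiny distroless runtime; only the binary is copied over
FROM gcr.io/distroless/static-debian12
COPY --from=builder /app /app
USER nonroot
ENTRYPOINT ["/app"]
```

Everything from the builder stage (compiler, sources, caches) is discarded; only what COPY --from explicitly pulls across ends up in the final image.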
PID 1 and Graceful Shutdown
Orchestrators (Kubernetes) communicate with containers via signals (for example, SIGTERM for stopping).
If your application is started through a shell (for example, npm start), it might not receive that signal because sh doesn’t forward signals to child processes. As a result Kubernetes will kill the pod hard (SIGKILL), which can lead to data loss or interrupted transactions.
Solution:
- Use the exec form of CMD: CMD ["node", "server.js"].
- Use tini, a tiny init process that properly forwards signals to child processes.
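On Alpine, for example, tini can be installed from the package manager and wired in as the entrypoint (a sketch; the package name and /sbin/tini path hold for Alpine specifically):

```dockerfile
FROM node:18.16.0-alpine
RUN apk add --no-cache tini
WORKDIR /app
COPY . .
# tini becomes PID 1 and forwards SIGTERM to the node process
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["node", "server.js"]
```

Alternatively, docker run --init injects Docker's bundled tini without changing the Dockerfile at all.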
Conclusion
An ideal Docker image stands on three pillars:
- Speed (optimal cache and small size).
- Security (non-root user, absence of unnecessary utilities, correct secret handling).
- Reliability (deterministic version tags).
To avoid keeping all these rules in your head, embed hadolint into your CI pipeline. It's a static analyzer for Dockerfiles that flags syntax errors and best-practice violations before the image is even built.
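Running it doesn't even require installing anything locally: the official hadolint/hadolint image reads a Dockerfile from stdin:

```shell
# Lint the Dockerfile in the current directory via the official image
docker run --rm -i hadolint/hadolint < Dockerfile
```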