Let’s be honest: nobody wakes up excited to deploy an SFTP server.
We prefer APIs, object storage, event streams, and managed integrations. Unfortunately, real systems are messy. Eventually, you will deal with a third party that can only push files via SFTP, often for large, sensitive, and time-boxed migrations.
That was my situation. I needed to ingest thousands of sensitive PDFs from an external contractor with strict constraints:
- No access to production systems
- No shell access for users
- Strong cryptography only
- Complete audit logs for every connection and file operation
- Easy teardown once the job was done
The fastest way to meet those requirements without creating a long-term maintenance liability was to containerize the SFTP server and treat it as disposable infrastructure.
This post documents the exact approach, trade-offs included.
The problem with “just installing SFTP on a VM”
Installing OpenSSH directly on a host and configuring SFTP users works, but it scales poorly from an operational and security perspective.
In practice, you end up with:
- Complex `sshd_config` rules and fragile `chroot` setups
- Manual user management on a mutable system
- Hard-to-reason permission boundaries
- Logs scattered between system and application contexts
- Painful cleanup once the migration is complete
This is survivable for a one-off server, but it isn't repeatable or easily auditable.
Why Docker is the right abstraction here
Using Docker doesn’t magically make SFTP secure, but it does give us hard boundaries that are otherwise tedious to maintain.
What we gain:
- Process isolation: Even if something goes wrong inside the SFTP service, the blast radius is confined to the container.
- Immutable configuration: The entire server definition becomes code. If something drifts, we redeploy instead of debugging state.
- Fast teardown: When the migration is finished, we stop the container and securely delete the attached disk. No ghosts left behind.
- Clear separation of concerns: The host focuses on hardening and observability. The container focuses on SFTP only.
This aligns well with a DevSecOps mindset: short-lived, auditable infrastructure with minimal privileges.
Architecture overview
At a high level, the design is intentionally boring:
Core components:
- Runtime: Docker running atmoz/sftp, a minimal image that wraps OpenSSH in an SFTP-only configuration.
- Storage: An encrypted persistent disk mounted into the container as a volume.
- Network: A locked-down VPC allowing only:
  - TCP 22 (host SSH administration)
  - TCP 2222 (SFTP traffic)
- Observability: System logs + container logs forwarded to centralized logging.
The result is a clearly defined upload zone with explicit ingress and full traceability.
Implementation
1. Host provisioning and baseline hardening
Before running Docker, the VM itself must be boring, patched, and hostile by default.
I typically use Ubuntu 24.04 LTS or similar: stable, predictable, and well supported.
Key hardening steps:
- Key-only SSH access
  - Disable password authentication
  - Disable direct root login
  - Enforce modern ciphers
- Firewall (UFW)
  - Default deny all incoming traffic
  - Explicitly allow:
    - `22/tcp` (administration)
    - `2222/tcp` (SFTP)
- Brute-force protection
  - Install and configure Fail2Ban to automatically block repeated authentication failures.
  - Aggressive limits on failed attempts
- Automatic security updates
  - Enable `unattended-upgrades`
  - No manual patch windows
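A condensed, hedged sketch of those steps on an Ubuntu host (package and service names assume Debian/Ubuntu; adapt to your distribution):

```bash
# Firewall: deny everything inbound, then open only administration and SFTP
sudo ufw default deny incoming
sudo ufw allow 22/tcp    # host SSH administration
sudo ufw allow 2222/tcp  # containerized SFTP
sudo ufw enable

# Host SSH hardening: set these in /etc/ssh/sshd_config, then restart the daemon
#   PasswordAuthentication no
#   PermitRootLogin no
sudo systemctl restart ssh

# Brute-force protection and automatic security updates
sudo apt-get install -y fail2ban unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
```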
Example Fail2Ban jail targeting the containerized SFTP port:
```ini
[sftp-with-docker]
enabled = true
port = 2222
filter = sshd
action = ufw[application="OpenSSH", blocktype=reject]
logpath = /var/log/auth.log
maxretry = 3
```
This catches both host-level SSH abuse and containerized SFTP attempts.
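Once the jail file is in place (typically under /etc/fail2ban/jail.d/), reload Fail2Ban and confirm the jail is active:

```bash
sudo systemctl reload fail2ban
sudo fail2ban-client status sftp-with-docker
```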
For further reading on securing Linux environments, the book Deployment from Scratch offers excellent guidelines for hardening systems in compliance with security standards.
2. Container configuration (where most mistakes happen)
We use the atmoz/sftp image because it does one thing well: run OpenSSH in SFTP-only mode.
The critical detail is how the container is launched.
```bash
docker run \
  -d \
  --restart unless-stopped \
  -p 2222:22 \
  -v /home/ftp_server/disks/data/users/client_a/upload:/home/client_a/upload \
  -v /home/ftp_server/config/ssh_host_ed25519_key:/etc/ssh/ssh_host_ed25519_key \
  atmoz/sftp \
  client_a::1001
```
Why this matters:
- Persistent data volume: Uploaded files survive container restarts and redeployments.
- Persistent SSH host keys: This is non-negotiable. Without it, every redeploy changes the server fingerprint and breaks automation with scary MITM warnings.
- No shell access: The container enforces `internal-sftp` only. Users never get a shell, even if credentials leak.
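One detail that is easy to miss: the mounted host key must exist on the host before the first run. A minimal sketch, reusing the config path from the command above:

```bash
# One-time: create the persistent host key the container mounts above
# (run as a user with write access to the config directory)
mkdir -p /home/ftp_server/config
ssh-keygen -t ed25519 -N '' -f /home/ftp_server/config/ssh_host_ed25519_key
```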
The sshd_config Configuration
Launching the container is only half the battle. By default, OpenSSH can be too permissive. To ensure users are truly restricted to an SFTP “jail” and cannot use tricks like tunneling or X11 forwarding, we inject a custom, minimalist SSH configuration (sshd_config).
This file acts as the last line of defense inside the container:
```
# Secure defaults
# Based on https://github.com/atmoz/sftp/blob/master/files/sshd_config
Protocol 2
HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key

# Faster connection
# See: https://github.com/atmoz/sftp/issues/11
UseDNS no

# Limited access
PermitRootLogin no
X11Forwarding no
AllowTcpForwarding no

# Force sftp and chroot jail
Subsystem sftp internal-sftp -f AUTH -l VERBOSE
ForceCommand internal-sftp -f AUTH -l VERBOSE
ChrootDirectory %h

# Global verbosity
LogLevel VERBOSE

# Enforce public key authentication only
AuthenticationMethods publickey
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no # Disable keyboard-interactive authentication
```
Here is the breakdown of why these lines are critical for your security:
- **`Protocol 2` & `HostKey`**: We force the modern SSH protocol and explicitly define where the server identity keys live (which we persist via Docker volumes).
- **`UseDNS no`**: A critical performance optimization. It prevents the server from trying to resolve the client's hostname on connection, which often causes annoying login delays.
- **`PermitRootLogin`, `X11Forwarding`, `AllowTcpForwarding`**: We disable root login and block any attempt to use the server to tunnel traffic or forward graphical interfaces. If an attacker gets in, they cannot pivot to other systems from here.
- **`ForceCommand internal-sftp` & `ChrootDirectory %h`**: This is the "magic" of isolation. It forces the internal SFTP subsystem (no external shell binaries needed) and locks (chroots) the user into their home directory. They cannot "go up" to view system files.
- **`AuthenticationMethods publickey`**: Zero-password policy. We only accept public key authentication, eliminating the risk of brute-force attacks on weak passwords.
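To inject this file, bind-mount it over the image's default config. This is a sketch under the assumption that the file sits next to the persisted host key; the container's sshd reads /etc/ssh/sshd_config at startup:

```bash
docker run \
  -d \
  --restart unless-stopped \
  -p 2222:22 \
  -v /home/ftp_server/disks/data/users/client_a/upload:/home/client_a/upload \
  -v /home/ftp_server/config/ssh_host_ed25519_key:/etc/ssh/ssh_host_ed25519_key \
  -v /home/ftp_server/config/sshd_config:/etc/ssh/sshd_config:ro \
  atmoz/sftp \
  client_a::1001
```

Mounting read-only keeps the config immutable at runtime; drop `:ro` if the image's startup scripts need to adjust the file.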
3. SSH key management (the most common support issue)
We enforce key-based authentication only. Passwords alone are not acceptable for this type of data.
This introduces a recurring problem with non-Unix clients.
The failure mode:
- Users generate keys with PuTTYgen (Windows)
- Keys are in SSH2 / RFC 4716 format
- OpenSSH rejects them with an `invalid format` error
The fix:
- Convert keys to OpenSSH format
- Validate them before deploying
Operational rule: When a key changes, recreate the container. The image reads keys at startup. Restarting is faster than debugging permissions.
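As a hedged example of the conversion and validation on the server side (filenames are placeholders), OpenSSH's own tooling can do both:

```bash
# Convert a public key exported in RFC 4716 / SSH2 format to OpenSSH format
ssh-keygen -i -f client_a_rfc4716.pub > client_a_openssh.pub

# Sanity check: prints the key's fingerprint only if it parses correctly
ssh-keygen -l -f client_a_openssh.pub
```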
4. Logging and auditability
For sensitive data transfers, “it works” is irrelevant. You need to answer who did what, when, and from where.
This setup captures:
- Host authentication logs
  - `/var/log/auth.log`
  - Failed and successful login attempts
- SFTP activity logs
  - Uploads
  - Downloads
  - Deletions
  - Directory changes
- Centralized aggregation
  - Forwarded via a logging agent (for example, Google Cloud Ops Agent)
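Before anything is centralized, both streams can be spot-checked directly on the host. A quick sketch, assuming the container was started with `--name sftp` (the run command above doesn't name it):

```bash
# Host-level SSH authentication events (port 22)
sudo tail -f /var/log/auth.log

# SFTP session activity from the container (port 2222)
docker logs -f sftp
```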
Once centralized, you can:
- Build alerts for anomalous behavior
- Retain logs independently of the VM lifecycle
- Satisfy audit and compliance requirements without guesswork
Operational trade-offs (and why they’re acceptable)
No setup is free. These are intentional constraints:
- No interactive shell: Users cannot SSH in to run `ls` or `mv`. They are restricted to SFTP commands only. This is a security feature, not a bug: it removes an entire class of escalation risks.
- Single-purpose server: No FTP, FTPS, SCP, or extra services. Smaller attack surface, simpler reasoning.
- Single-tenant directories: By default, users can't see each other. If you need to move files between users (e.g., from an "upload" user to a "download" user), you'll need a cron job on the host to sync folders with `rclone` or `rsync` (see the sketch after this list).
- Time synchronization: Ensure NTP is running on the host. Accurate timestamps are critical for audit logs and certificate validation in regulated industries.
- Time-boxed lifespan: When the job is done, shut it down, delete the disks, and revoke keys.
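A hedged sketch of such a host-side sync job; the destination user and path are hypothetical and mirror the volume layout used earlier:

```bash
# /etc/cron.d/sftp-sync: every 5 minutes, move finished uploads from client_a to client_b
*/5 * * * * root rsync -a --remove-source-files /home/ftp_server/disks/data/users/client_a/upload/ /home/ftp_server/disks/data/users/client_b/download/
```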
What I’d improve next
If this server were long-lived, I’d consider:
- Infrastructure-as-Code for VM provisioning
- Ephemeral IP allowlists with automation
- Object storage ingestion instead of filesystem writes
For migrations and legacy integrations, though, this approach hits the sweet spot: secure, auditable, and easy to dismantle.
Closing thoughts
If you’re still hand-crafting SFTP users on mutable servers, you’re paying a long-term tax for a short-term problem.
Containerizing SFTP turns it into a disposable utility: predictable, isolated, and easy to reason about.
If you’ve deployed SFTP differently, or see a flaw in this approach, I’d genuinely like to hear about it. Leave a comment.
And if this saved you time, don’t be afraid to reuse it.