A self-contained Docker Compose lab that spins up a full TantoC2 deployment with multiple network segments, targets, and relay infrastructure for hands-on testing.
## Prerequisites

| Requirement | Notes |
|---|---|
| Docker | Engine 20.10+ recommended |
| Docker Compose | v2 plugin (`docker compose`) |
| make | Used by the build scripts |
| sshpass | Required by `start.sh` to auto-SSH into the CLI container |
Ensure the Docker daemon is running and your user is in the docker group (or use sudo).
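A quick way to confirm the tools are present before starting (an optional sketch, not part of the lab scripts):

```
# Report any missing prerequisite; prints nothing when all are present.
for tool in docker make sshpass; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
docker compose version >/dev/null 2>&1 || echo "missing: docker compose v2 plugin"
```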
## Quick Start

From the `user-testing/` directory, run `./start.sh`.

This single command:
- Runs `build.sh` to build all Python wheels (main project + plugins)
- Starts all Docker Compose containers
- SSHes into the CLI container (root:tantoc2 on port 2222)
- Tears down all containers when you exit the SSH session
You will land in the TantoC2 CLI shell, ready to operate.
## Network Topology

The lab creates five Docker networks that simulate distinct network segments:

| Network | Subnet | Purpose |
|---|---|---|
| c2 | 10.10.0.0/24 | Operator-to-server (internal) |
| wan | 172.20.0.0/24 | WAN segment for agent callbacks |
| lan | 172.21.0.0/24 | Internal LAN, isolated from WAN |
| external | 172.22.0.0/24 | External segment via socat relay |
| dmz | 172.23.0.0/24 | DMZ segment via nginx redirector |
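These segments map to a top-level `networks:` section in the Compose file. A sketch of how they could be declared (the lab's actual `docker-compose.yml` may differ; in particular, `internal: true` on the lan network is an assumption about how its isolation is achieved):

```yaml
networks:
  c2:
    ipam:
      config:
        - subnet: 10.10.0.0/24
  wan:
    ipam:
      config:
        - subnet: 172.20.0.0/24
  lan:
    internal: true   # assumption: no outbound route, matching the isolated LAN
    ipam:
      config:
        - subnet: 172.21.0.0/24
  external:
    ipam:
      config:
        - subnet: 172.22.0.0/24
  dmz:
    ipam:
      config:
        - subnet: 172.23.0.0/24
```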
## Container Map

| Container | IP Address(es) | Networks | Role |
|---|---|---|---|
| cli | 10.10.0.10 | c2 | TantoC2 CLI, SSH on :2222 |
| server | 10.10.0.20, 172.20.0.10 | c2, wan | Teamserver, API on :8443 |
| web | 10.10.0.11 | c2 | Web UI on :8080 |
| target1 | 172.20.0.20, 172.21.0.10 | wan, lan | Dual-homed SSH target + relay |
| target2 | 172.21.0.20 | lan | LAN-only target (no direct route to server) |
| socat | 172.20.0.30, 172.22.0.10 | wan, external | TCP relay forwarder |
| target3 | 172.22.0.20 | external | External segment target |
| redirector | 172.20.0.40, 172.23.0.10 | wan, dmz | nginx HTTP proxy |
| target4 | 172.23.0.20 | dmz | DMZ target |
```
                  +-----------+
                  |    cli    | 10.10.0.10
                  +-----+-----+
                        | c2 (10.10.0.0/24)
            +-----------+-----------+
            |                       |
     +------+------+          +-----+------+
     |   server    |          |    web     |
     | 10.10.0.20  |          | 10.10.0.11 |
     | 172.20.0.10 |          +------------+
     +------+------+
            | wan (172.20.0.0/24)
    +-------+---------------+------------------+
    |                       |                  |
+---+---------+      +------+------+    +------+------+
|   target1   |      |    socat    |    | redirector  |
| 172.20.0.20 |      | 172.20.0.30 |    | 172.20.0.40 |
| 172.21.0.10 |      | 172.22.0.10 |    | 172.23.0.10 |
+------+------+      +------+------+    +------+------+
       | lan                | external         | dmz
       | (172.21.0.0/24)    | (172.22.0.0/24)  | (172.23.0.0/24)
+------+------+      +------+------+    +------+------+
|   target2   |      |   target3   |    |   target4   |
| 172.21.0.20 |      | 172.22.0.20 |    | 172.23.0.20 |
+-------------+      +-------------+    +-------------+
```
A visual topology diagram is also available locally: run `python3 serve_topology.py` in the `user-testing/` directory and open http://localhost:9999.
## Test Scenarios

### 1. Direct Callback

The simplest scenario. An agent on target1 calls back directly to the teamserver over the WAN network.

```
target1 (172.20.0.20) ──TCP──▶ server (172.20.0.10)
```
- Create a TCP listener on the server
- Deploy an agent to target1 pointed at 172.20.0.10
- The agent checks in over the WAN segment
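Before deploying, it can help to confirm that a target can actually reach the listener. A minimal Python check (a hypothetical helper, not part of TantoC2; the port is whatever your listener binds, e.g. 4444):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: run on target1 to verify the server's listener is reachable.
# can_connect("172.20.0.10", 4444)
```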
### 2. P2P Relay (Pivot)

Tests peer-to-peer relay through a dual-homed host. target2 is LAN-only and cannot reach the server directly. An agent on target1 relays traffic for an agent on target2.

```
target2 (172.21.0.20) ──P2P──▶ target1 (172.21.0.10) ──TCP──▶ server (172.20.0.10)
```
- Deploy an agent to target1 first (direct callback)
- Use the agent on target1 as a relay
- Deploy a second agent to target2 pointed at target1’s LAN IP
- target2’s traffic tunnels through target1 to reach the server
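Conceptually, the relay on target1 splices TCP streams between its LAN and WAN interfaces. A minimal sketch of that forwarding pattern (illustrative only; TantoC2's actual relay protocol is its own):

```python
import socket
import threading

def _pipe(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes one way until the source side closes.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def start_relay(bind_host: str, bind_port: int,
                upstream_host: str, upstream_port: int) -> int:
    """Listen on bind_host:bind_port and splice each inbound connection
    to upstream_host:upstream_port. Returns the actual bound port."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((bind_host, bind_port))
    srv.listen(5)

    def accept_loop() -> None:
        while True:
            client, _ = srv.accept()
            upstream = socket.create_connection((upstream_host, upstream_port))
            threading.Thread(target=_pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=_pipe, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv.getsockname()[1]
```

On target1 this would listen on the LAN side (172.21.0.10) and forward to the server's WAN address (172.20.0.10).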
### 3. Redirector

Tests traffic flowing through an nginx HTTP redirector in the DMZ. The redirector proxies HTTP requests to the teamserver.

```
target4 (172.23.0.20) ──HTTP──▶ redirector (172.23.0.10) ──HTTP──▶ server (172.20.0.10)
```
- Create an HTTP listener on the server
- Deploy an agent to target4 pointed at the redirector’s DMZ IP
- nginx forwards the callback to the server
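The redirector is conceptually a plain nginx reverse proxy. A sketch (the listen port and upstream port 80 are assumptions; use whatever your HTTP listener actually binds, and note the lab's real config file may differ):

```nginx
server {
    listen 80;
    location / {
        # Forward agent callbacks to the teamserver's HTTP listener.
        proxy_pass http://172.20.0.10:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```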
### 4. External (Socat Relay)

Tests traffic routed through a socat TCP relay on the external network segment.

```
target3 (172.22.0.20) ──TCP──▶ socat (172.22.0.10) ──TCP──▶ server (172.20.0.10)
```
- Create a TCP listener on the server
- Deploy an agent to target3 pointed at socat’s external IP
- socat forwards the TCP connection to the server
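The socat container's job amounts to a single command. A sketch, assuming a TCP listener on port 4444 (match your actual listener port; the lab's container may invoke socat differently):

```
# Accept connections on the external side and forward each one to the server.
socat TCP-LISTEN:4444,fork,reuseaddr TCP:172.20.0.10:4444
```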
## Automated Setup

The `scripts/setup.py` script automates engagement creation, listener setup, and agent deployment for any scenario.
```shell
# Inside the CLI container:

# Run a single scenario
python3 scripts/setup.py --scenario direct

# Available scenarios
python3 scripts/setup.py --scenario pivot
python3 scripts/setup.py --scenario redirector
python3 scripts/setup.py --scenario external

# Run all scenarios at once
python3 scripts/setup.py --scenario full
```
Each scenario will:
- Create an engagement (or reuse the existing one)
- Start the appropriate listener(s)
- Build and deploy agent(s) to the target(s)
- Wait for check-in confirmation
## Seeding Test Files

To populate targets with sample files for testing file transfer and collection modules:

```shell
python3 scripts/seed_files.py
```
This creates test files across the target containers for use with file browser and download operations.
## Manual Walkthrough

If you prefer to set things up by hand, here is the general flow after SSHing into the CLI container.
### Connect and Authenticate

```
tantoc2> connect https://server:8443
tantoc2> login admin
Password: tantoc2
```
### Create an Engagement

```
tantoc2> engagements create testing-lab
Engagement passphrase: testpass
tantoc2> engagements use <engagement-id>
```
### Start a Listener

```
tantoc2[testing-]> listeners create --protocol tcp --bind-port 4444
tantoc2[testing-]> listeners list
```
### Build and Deploy an Agent

```
tantoc2[testing-]> agents build --listener <listener-id> --os linux --arch amd64
tantoc2[testing-]> agents list-builds
```
Copy the built agent to target1 and execute it (the agent path below is illustrative):

```shell
# From another terminal, or via SSH (agent path is illustrative):
sshpass -p target scp ./agent [email protected]:/tmp/agent
sshpass -p target ssh [email protected] 'chmod +x /tmp/agent && /tmp/agent'
```
### Interact with the Agent

```
tantoc2[testing-]> agents list
tantoc2[testing-]> agents use <agent-id>
tantoc2[testing-][agent]> shell whoami
tantoc2[testing-][agent]> shell hostname
tantoc2[testing-][agent]> shell ip addr
```
### Set Up a P2P Relay

With an active agent on target1:

```
tantoc2[testing-][agent]> relay start --bind-port 5555
```
Then deploy a second agent on target2, configured to call back to target1 at 172.21.0.10:5555.
## Credentials Reference

| Service | Username | Password | Access |
|---|---|---|---|
| Teamserver API | admin | tantoc2 | https://server:8443 |
| CLI SSH | root | tantoc2 | `ssh -p 2222 root@localhost` |
| Web UI | admin | tantoc2 | http://localhost:8080 |
| Target hosts | root | target | SSH to any target container |
## Reset and Cleanup

### Full Reset

Wipe everything and start fresh with `./reset.sh`. This tears down all containers, removes volumes, and restarts the entire stack.
### Manual Teardown
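To stop the lab without the start.sh wrapper, use the standard Compose teardown commands:

```
docker compose down        # stop and remove the containers
docker compose down -v     # additionally remove the volumes
```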
### Rebuild After Code Changes

If you have modified TantoC2 source code and need to rebuild:
```shell
./build.sh                      # Rebuild wheels
docker compose up -d --build    # Recreate containers with new images
```
## Scripts Reference

| Script | Purpose |
|---|---|
| start.sh | One-command setup: build, start, SSH in, teardown on exit |
| build.sh | Build all Python wheels (main project + plugins) |
| reset.sh | Wipe all containers/volumes and restart |
| scripts/setup.py | Automated engagement/listener/agent setup per scenario |
| scripts/seed_files.py | Create test files on target containers |
| topology.html | Local network topology diagram (serve with serve_topology.py) |