User Testing Environment

A self-contained Docker Compose lab that spins up a full TantoC2 deployment with multiple network segments, targets, and relay infrastructure for hands-on testing.

Prerequisites

| Requirement | Notes |
| --- | --- |
| Docker | Engine 20.10+ recommended |
| Docker Compose | v2 plugin (docker compose) |
| make | Used by the build scripts |
| sshpass | Required by start.sh to auto-SSH into the CLI container |

Ensure the Docker daemon is running and your user is in the docker group (or use sudo).
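The checks above can be scripted. A minimal pre-flight sketch — the tool names come from the prerequisites table; the script itself is not part of the lab:

```python
"""Pre-flight check for the lab prerequisites (illustrative, not shipped with the lab)."""
import shutil

def missing_tools(tools):
    """Return the subset of `tools` not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

if __name__ == "__main__":
    missing = missing_tools(["docker", "make", "sshpass"])
    if missing:
        print("Missing prerequisites:", ", ".join(missing))
    else:
        print("All tools found; also confirm 'docker compose version' reports the v2 plugin.")
```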

Quick Start

From the user-testing/ directory:

```shell
./start.sh
```

This single command:

  1. Runs build.sh to build all Python wheels (main project + plugins)
  2. Starts all Docker Compose containers
  3. SSHes into the CLI container (root:tantoc2 on port 2222)
  4. Tears down all containers when you exit the SSH session

You will land in the TantoC2 CLI shell, ready to operate.

Network Topology

The lab creates five Docker networks that simulate distinct network segments:

| Network | Subnet | Purpose |
| --- | --- | --- |
| c2 | 10.10.0.0/24 | Operator-to-server (internal) |
| wan | 172.20.0.0/24 | WAN segment for agent callbacks |
| lan | 172.21.0.0/24 | Internal LAN, isolated from WAN |
| external | 172.22.0.0/24 | External segment via socat relay |
| dmz | 172.23.0.0/24 | DMZ segment via nginx redirector |
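For orientation, segments like these are typically declared under the top-level `networks:` key in docker-compose.yml. The fragment below is a sketch, not the lab's actual compose file — in particular the `internal: true` flags are assumptions about how the isolation is achieved:

```yaml
networks:
  c2:
    internal: true            # operator segment: no route out of the host (assumed)
    ipam:
      config:
        - subnet: 10.10.0.0/24
  wan:
    ipam:
      config:
        - subnet: 172.20.0.0/24
  lan:
    internal: true            # keeps target2 unreachable from the WAN (assumed)
    ipam:
      config:
        - subnet: 172.21.0.0/24
  # external (172.22.0.0/24) and dmz (172.23.0.0/24) follow the same pattern
```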

Container Map

| Container | IP Address(es) | Networks | Role |
| --- | --- | --- | --- |
| cli | 10.10.0.10 | c2 | TantoC2 CLI, SSH on :2222 |
| server | 10.10.0.20, 172.20.0.10 | c2, wan | Teamserver, API on :8443 |
| web | 10.10.0.11 | c2 | Web UI on :8080 |
| target1 | 172.20.0.20, 172.21.0.10 | wan, lan | Dual-homed SSH target + relay |
| target2 | 172.21.0.20 | lan | LAN-only target (no direct route to server) |
| socat | 172.20.0.30, 172.22.0.10 | wan, external | TCP relay forwarder |
| target3 | 172.22.0.20 | external | External segment target |
| redirector | 172.20.0.40, 172.23.0.10 | wan, dmz | nginx HTTP proxy |
| target4 | 172.23.0.20 | dmz | DMZ target |
```
                 +-----------+
                 |    cli    | 10.10.0.10
                 +-----+-----+
                       | c2 (10.10.0.0/24)
           +-----------+-----------+
           |                       |
    +------+------+         +------+-----+
    |   server    |         |    web     |
    | 10.10.0.20  |         | 10.10.0.11 |
    | 172.20.0.10 |         +------------+
    +------+------+
           | wan (172.20.0.0/24)
           +----------------+----------------+
           |                |                |
    +------+------+  +------+------+  +------+------+
    |   target1   |  |    socat    |  | redirector  |
    | 172.20.0.20 |  | 172.20.0.30 |  | 172.20.0.40 |
    | 172.21.0.10 |  | 172.22.0.10 |  | 172.23.0.10 |
    +------+------+  +------+------+  +------+------+
           | lan            | external       | dmz
           | (172.21)       | (172.22)       | (172.23)
    +------+------+  +------+------+  +------+------+
    |   target2   |  |   target3   |  |   target4   |
    | 172.21.0.20 |  | 172.22.0.20 |  | 172.23.0.20 |
    +-------------+  +-------------+  +-------------+
```

A visual topology diagram is also available locally — run python3 serve_topology.py in the user-testing/ directory and open http://localhost:9999.
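For reference, serving a static page like topology.html takes only the standard library. This is a sketch of the idea, not the actual serve_topology.py — only the port 9999 and the topology.html filename come from this page; the function name and localhost bind are assumptions:

```python
# Sketch only: the real serve_topology.py may differ.
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

def serve_topology(port=9999, directory="."):
    """Return an HTTP server for `directory`; call .serve_forever() to run,
    then browse http://localhost:<port>/topology.html."""
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    return ThreadingHTTPServer(("127.0.0.1", port), handler)
```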

Test Scenarios

1. Direct Callback

The simplest scenario: an agent on target1 calls back directly to the teamserver over the WAN network.

```
target1 (172.20.0.20) ──TCP──▶ server (172.20.0.10)
```

  • Create a TCP listener on the server
  • Deploy an agent to target1 pointed at 172.20.0.10
  • The agent checks in over the WAN segment

2. P2P Relay (Pivot)

Tests peer-to-peer relay through a dual-homed host. target2 is LAN-only and cannot reach the server directly, so an agent on target1 relays traffic for an agent on target2.

```
target2 (172.21.0.20) ──P2P──▶ target1 (172.21.0.10) ──TCP──▶ server (172.20.0.10)
```

  • Deploy an agent to target1 first (direct callback)
  • Use the agent on target1 as a relay
  • Deploy a second agent to target2 pointed at target1’s LAN IP
  • target2’s traffic tunnels through target1 to reach the server

3. Redirector

Tests traffic flowing through an nginx HTTP redirector in the DMZ. The redirector proxies HTTP requests to the teamserver.

```
target4 (172.23.0.20) ──HTTP──▶ redirector (172.23.0.10) ──HTTP──▶ server (172.20.0.10)
```

  • Create an HTTP listener on the server
  • Deploy an agent to target4 pointed at the redirector’s DMZ IP
  • nginx forwards the callback to the server
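The redirector's job amounts to a reverse proxy. A configuration sketch in the spirit of the lab's nginx container — illustrative only, since the real conf ships inside the redirector image and the upstream listener port here is an assumption:

```nginx
# Sketch: forward DMZ callbacks to the teamserver on the WAN segment.
server {
    listen 80;
    location / {
        proxy_pass http://172.20.0.10:8080;    # teamserver HTTP listener (port assumed)
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```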

4. External (Socat Relay)

Tests traffic routed through a socat TCP relay on the external network segment.

```
target3 (172.22.0.20) ──TCP──▶ socat (172.22.0.10) ──TCP──▶ server (172.20.0.10)
```

  • Create a TCP listener on the server
  • Deploy an agent to target3 pointed at socat’s external IP
  • socat forwards the TCP connection to the server
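What socat does here is a byte-for-byte TCP relay: accept a connection on one side, dial the server on the other, and shuttle traffic both ways. The sketch below reimplements that idea in Python purely for clarity — the lab uses plain socat, and the function names are ours:

```python
"""Toy TCP relay illustrating the socat container's role (not lab code)."""
import socket
import threading

def pipe(src, dst):
    """Copy bytes from src to dst until EOF, then close both directions."""
    try:
        while (data := src.recv(4096)):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        for s in (src, dst):
            try:
                s.shutdown(socket.SHUT_RDWR)
            except OSError:
                pass

def start_relay(listen_host, target_addr):
    """Listen on an ephemeral port, forward every client to target_addr.
    Returns the (host, port) the relay is listening on."""
    listener = socket.socket()
    listener.bind((listen_host, 0))   # the lab's relay binds a fixed port instead
    listener.listen()

    def serve():
        while True:
            client, _ = listener.accept()
            upstream = socket.create_connection(target_addr)
            # one unidirectional pipe per direction
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    threading.Thread(target=serve, daemon=True).start()
    return listener.getsockname()
```

In socat terms this is a single listen-and-forward invocation; the Python version just makes the two unidirectional pipes explicit.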

Automated Setup

scripts/setup.py automates engagement creation, listener setup, and agent deployment for any scenario.

```shell
# Inside the CLI container:

# Run a single scenario
python3 scripts/setup.py --scenario direct

# Available scenarios
python3 scripts/setup.py --scenario pivot
python3 scripts/setup.py --scenario redirector
python3 scripts/setup.py --scenario external

# Run all scenarios at once
python3 scripts/setup.py --scenario full
```

Each scenario will:

  1. Create an engagement (or reuse the existing one)
  2. Start the appropriate listener(s)
  3. Build and deploy agent(s) to the target(s)
  4. Wait for check-in confirmation
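The scenario-to-target mapping implied by the container map could be expressed like this. This is a hypothetical sketch of one piece of such a script — setup.py's real structure is not shown on this page:

```python
# Hypothetical: mirrors the scenarios documented above, not setup.py's code.
SCENARIOS = {
    "direct":     ["target1"],
    "pivot":      ["target1", "target2"],
    "redirector": ["target4"],
    "external":   ["target3"],
}

def targets_for(scenario):
    """Resolve a --scenario value to the targets that receive agents."""
    if scenario == "full":
        return sorted({t for targets in SCENARIOS.values() for t in targets})
    return SCENARIOS[scenario]
```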

Seeding Test Files

To populate targets with sample files for testing file transfer and collection modules:

```shell
python3 scripts/seed_files.py
```

This creates test files across the target containers for use with file browser and download operations.
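Per target, seeding boils down to writing sample files into known paths. A sketch of that core step — the file names and contents here are invented for illustration, not seed_files.py's actual data:

```python
"""Sketch of a per-target seeding step; SAMPLE_FILES is illustrative only."""
import os

SAMPLE_FILES = {
    "documents/notes.txt": "quarterly meeting notes\n",
    "documents/inventory.csv": "host,ip\ntarget1,172.20.0.20\n",
    "downloads/readme.txt": "sample download target\n",
}

def seed(root):
    """Create the sample files under `root`; return the paths written."""
    written = []
    for rel, content in SAMPLE_FILES.items():
        path = os.path.join(root, rel)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as fh:
            fh.write(content)
        written.append(path)
    return written
```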

Manual Walkthrough

If you prefer to set things up by hand, here is the general flow after SSHing into the CLI container.

Connect and Authenticate

```
tantoc2> connect https://server:8443
tantoc2> login admin
Password: tantoc2
```

Create an Engagement

```
tantoc2> engagements create testing-lab
Engagement passphrase: testpass
tantoc2> engagements use <engagement-id>
```

Start a Listener

```
tantoc2[testing-]> listeners create --protocol tcp --bind-port 4444
tantoc2[testing-]> listeners list
```

Build and Deploy an Agent

```
tantoc2[testing-]> agents build --listener <listener-id> --os linux --arch amd64
tantoc2[testing-]> agents list-builds
```

Copy the built agent to target1 and execute it:

```shell
# From another terminal or via SSH:
sshpass -p target ssh [email protected]
# Transfer and run the agent binary on the target
```

Interact with the Agent

```
tantoc2[testing-]> agents list
tantoc2[testing-]> agents use <agent-id>
tantoc2[testing-][agent]> shell whoami
tantoc2[testing-][agent]> shell hostname
tantoc2[testing-][agent]> shell ip addr
```

Set Up a P2P Relay

With an active agent on target1:

```
tantoc2[testing-][agent]> relay start --bind-port 5555
```

Then deploy a second agent on target2, configured to call back to target1 at 172.21.0.10:5555.

Credentials Reference

| Service | Username | Password | Access |
| --- | --- | --- | --- |
| Teamserver API | admin | tantoc2 | https://server:8443 |
| CLI SSH | root | tantoc2 | ssh -p 2222 root@localhost |
| Web UI | admin | tantoc2 | http://localhost:8080 |
| Target hosts | root | target | SSH to any target container |

Reset and Cleanup

Full Reset

Wipe everything and start fresh:

```shell
./reset.sh
```

This tears down all containers, removes volumes, and restarts the entire stack.

Manual Teardown

```shell
docker compose down -v
```

Rebuild After Code Changes

If you have modified TantoC2 source code and need to rebuild:

```shell
./build.sh                     # Rebuild wheels
docker compose up -d --build   # Recreate containers with new images
```

Scripts Reference

| Script | Purpose |
| --- | --- |
| start.sh | One-command setup: build, start, SSH in, teardown on exit |
| build.sh | Build all Python wheels (main project + plugins) |
| reset.sh | Wipe all containers/volumes and restart |
| scripts/setup.py | Automated engagement/listener/agent setup per scenario |
| scripts/seed_files.py | Create test files on target containers |
| topology.html | Local network topology diagram (serve with serve_topology.py) |