This project currently has no tests. Please add the capability to run tests, including an initial simple set of test cases that verifies that environment variable substitution in the config file works correctly.
pyproject.toml:
[project]
name = "machine"
version = "1.0.0"
description = "Utility for creating and managing VMs"
readme = "README.md"
license = "AGPL-3.0-only"
requires-python = ">=3.8"
authors = [
    { name = "Stirlingbridge", email = "info@stirlingbridge.website" },
]
classifiers = [
    "Programming Language :: Python :: 3.8",
    "Operating System :: OS Independent",
]
dependencies = [
    "click==8.1.7",
    "python-digitalocean==1.17.0",
    "ruamel.yaml>=0.17.32",
    "requests==2.32.3",
    "expandvars==1.0.0",
]

[project.scripts]
machine = "machine.main:main"

[project.urls]
Homepage = "https://github.com/stirlingbridge/machine"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[dependency-groups]
dev = [
    "flake8",
    "black",
]
machine/config.py:
import os
import re
from pathlib import Path
from machine.di import d
from machine.factory import yaml
from machine.log import fatal_error, debug
from machine.types import Config, MachineConfig
from machine import constants

_env_var_pattern = re.compile(r"\$\{([^}]+)\}")


def _expand_env_vars(value):
    if isinstance(value, str):

        def _replace(match):
            expr = match.group(1)
            if ":-" in expr:
                var_name, default = expr.split(":-", 1)
                return os.environ.get(var_name, default)
            else:
                if expr not in os.environ:
                    fatal_error(f"Environment variable '{expr}' referenced in config is not set")
                return os.environ[expr]

        return _env_var_pattern.sub(_replace, value)
    elif isinstance(value, dict):
        return {k: _expand_env_vars(v) for k, v in value.items()}
    elif isinstance(value, list):
        return [_expand_env_vars(item) for item in value]
    return value


class _loaded_config:
    c: any = None


def _load_config_data(config_file_name: str):
    if not config_file_name:
        config_file_name = constants.default_config_file_path
    config_path = Path(os.path.expanduser(config_file_name))
    if not config_path.exists():
        fatal_error(f"Error: Config file: {config_path} not found")
    config = _expand_env_vars(yaml().load(open(config_path, "r")))
    if d.opt.debug:
        debug(f"Loaded config file: {config_path}")
        debug(f"Parsed config: {config}")
    _loaded_config.c = config
    return config


def get(config_file_name: str) -> Config:
    config = _load_config_data(config_file_name)
    config_do = config["digital-ocean"]
    return Config(
        config_do["access-token"],
        config_do["ssh-key"],
        config_do.get("dns-zone"),
        config_do["machine-size"],
        config_do["image"],
        config_do["region"],
        config_do["project"],
    )


def get_machine(name: str) -> MachineConfig:
    if not _loaded_config.c:
        fatal_error("Attempt to fetch machine data before config loaded")
    config = _loaded_config.c
    config_machines = config["machines"]
    target_config = config_machines[name]
    return MachineConfig(
        target_config["new-user-name"],
        target_config.get("script-url"),
        target_config.get("script-dir"),
        target_config.get("script-path"),
        target_config.get("script-args"),
    )


def get_machines():
    if not _loaded_config.c:
        fatal_error("Attempt to fetch machine data before config loaded")
    config = _loaded_config.c

    ret = {}
    for name in config["machines"]:
        ret[name] = get_machine(name)
    return ret
Contents of machine/:
__init__.py __pycache__ cloud_config.py config.py constants.py defaults.py di.py factory.py log.py main.py subcommands types.py util.py
machine/log.py:
import sys


def fatal_error(s: str):
    print(s, file=sys.stderr)
    sys.exit(1)


def debug(s: str):
    print(s, file=sys.stderr)


def info(s: str):
    print(s, file=sys.stderr)


def output(s: str):
    print(s)
machine/types.py:
from dataclasses import dataclass

TAG_MACHINE_CREATED = "machine:created"
TAG_MACHINE_TYPE_PREFIX = "machine:type:"
TAG_MACHINE_SESSION_PREFIX = "machine:session:"


@dataclass
class CliOptions:
    debug: bool
    quiet: bool
    verbose: bool
    dry_run: bool


@dataclass
class Config:
    access_token: str
    ssh_key: str
    dns_zone: str
    machine_size: str
    image: str
    region: str
    project: str


@dataclass
class MainCmdCtx:
    config: Config
    session_id: str


@dataclass
class MachineConfig:
    new_user_name: str
    script_url: str
    script_dir: str
    script_path: str
    script_args: str

Now I have a good understanding of the codebase. Let me add pytest as a dev dependency and create the test infrastructure.
Old:
dev = [
    "flake8",
    "black",
]
New:
dev = [
    "flake8",
    "black",
    "pytest",
]

tests/test_config.py:
import os
import pytest
from unittest.mock import patch
from machine.config import _expand_env_vars


class TestExpandEnvVars:
    def test_plain_string_unchanged(self):
        assert _expand_env_vars("hello world") == "hello world"

    def test_non_string_types_unchanged(self):
        assert _expand_env_vars(42) == 42
        assert _expand_env_vars(3.14) == 3.14
        assert _expand_env_vars(True) is True
        assert _expand_env_vars(None) is None

    def test_simple_variable_substitution(self):
        with patch.dict(os.environ, {"MY_VAR": "my_value"}):
            assert _expand_env_vars("${MY_VAR}") == "my_value"

    def test_variable_embedded_in_string(self):
        with patch.dict(os.environ, {"HOST": "example.com"}):
            assert _expand_env_vars("https://${HOST}/api") == "https://example.com/api"

    def test_multiple_variables_in_string(self):
        with patch.dict(os.environ, {"HOST": "example.com", "PORT": "8080"}):
            assert _expand_env_vars("${HOST}:${PORT}") == "example.com:8080"

    def test_default_value_when_var_unset(self):
        env = os.environ.copy()
        env.pop("UNSET_VAR", None)
        with patch.dict(os.environ, env, clear=True):
            assert _expand_env_vars("${UNSET_VAR:-fallback}") == "fallback"

    def test_default_value_ignored_when_var_set(self):
        with patch.dict(os.environ, {"MY_VAR": "actual"}):
            assert _expand_env_vars("${MY_VAR:-fallback}") == "actual"

    def test_default_value_empty_string(self):
        env = os.environ.copy()
        env.pop("UNSET_VAR", None)
        with patch.dict(os.environ, env, clear=True):
            assert _expand_env_vars("${UNSET_VAR:-}") == ""

    def test_unset_variable_without_default_exits(self):
        env = os.environ.copy()
        env.pop("MISSING_VAR", None)
        with patch.dict(os.environ, env, clear=True):
            with pytest.raises(SystemExit):
                _expand_env_vars("${MISSING_VAR}")

    def test_dict_values_expanded(self):
        with patch.dict(os.environ, {"TOKEN": "secret123"}):
            data = {"key": "${TOKEN}", "plain": "no-change"}
            result = _expand_env_vars(data)
            assert result == {"key": "secret123", "plain": "no-change"}

    def test_nested_dict_expanded(self):
        with patch.dict(os.environ, {"VAL": "deep"}):
            data = {"outer": {"inner": "${VAL}"}}
            result = _expand_env_vars(data)
            assert result == {"outer": {"inner": "deep"}}

    def test_list_values_expanded(self):
        with patch.dict(os.environ, {"A": "x", "B": "y"}):
            data = ["${A}", "literal", "${B}"]
            result = _expand_env_vars(data)
            assert result == ["x", "literal", "y"]

    def test_mixed_nested_structure(self):
        with patch.dict(os.environ, {"V": "replaced"}):
            data = {"list": ["${V}", "fixed"], "nested": {"k": "${V}"}}
            result = _expand_env_vars(data)
            assert result == {"list": ["replaced", "fixed"], "nested": {"k": "replaced"}}
 + iniconfig==2.3.0
 ~ machine==1.0.0 (from file:///home/david/projects/bpi/machine)
 + pluggy==1.6.0
 + pygments==2.19.2
 + pytest==9.0.2
============================= test session starts ==============================
platform linux -- Python 3.12.3, pytest-9.0.2, pluggy-1.6.0 -- /home/david/projects/bpi/machine/.venv/bin/python3
cachedir: .pytest_cache
rootdir: /home/david/projects/bpi/machine
configfile: pyproject.toml
collecting ... collected 13 items

tests/test_config.py::TestExpandEnvVars::test_plain_string_unchanged PASSED [  7%]
tests/test_config.py::TestExpandEnvVars::test_non_string_types_unchanged PASSED [ 15%]
tests/test_config.py::TestExpandEnvVars::test_simple_variable_substitution PASSED [ 23%]
tests/test_config.py::TestExpandEnvVars::test_variable_embedded_in_string PASSED [ 30%]
tests/test_config.py::TestExpandEnvVars::test_multiple_variables_in_string PASSED [ 38%]
tests/test_config.py::TestExpandEnvVars::test_default_value_when_var_unset PASSED [ 46%]
tests/test_config.py::TestExpandEnvVars::test_default_value_ignored_when_var_set PASSED [ 53%]
tests/test_config.py::TestExpandEnvVars::test_default_value_empty_string PASSED [ 61%]
tests/test_config.py::TestExpandEnvVars::test_unset_variable_without_default_exits PASSED [ 69%]
tests/test_config.py::TestExpandEnvVars::test_dict_values_expanded PASSED [ 76%]
tests/test_config.py::TestExpandEnvVars::test_nested_dict_expanded PASSED [ 84%]
tests/test_config.py::TestExpandEnvVars::test_list_values_expanded PASSED [ 92%]
tests/test_config.py::TestExpandEnvVars::test_mixed_nested_structure PASSED [100%]

============================== 13 passed in 0.04s ==============================
All 13 tests pass. Now let me add a make test target and a test script for consistency with the existing project structure.
Makefile:
# Setup dev environment
dev:
	uv sync

build:
	./sh/build-package.sh

lint:
	uv run flake8

Old:
lint:
	uv run flake8
New:
test:
	uv run pytest tests/ -v

lint:
	uv run flake8
Everything is clean. Here's what was added:
Test infrastructure:
- Added pytest to dev dependencies in pyproject.toml
- Created tests/ directory with tests/test_config.py
- Added make test target (runs uv run pytest tests/ -v)
Test cases (13 tests covering _expand_env_vars):
- Plain strings and non-string types pass through unchanged
- Simple ${VAR} substitution
- Variables embedded in strings and multiple variables per string
- ${VAR:-default} syntax: uses default when unset, actual value when set, supports empty default
- Unset variable without default calls sys.exit(1)
- Recursive expansion through dicts, lists, and nested structures
Run tests with make test or uv run pytest tests/ -v.
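One detail of the unset-variable tests worth calling out: guaranteeing that a variable is absent requires replacing the whole environment, which is what patch.dict(os.environ, env, clear=True) does in the tests above. A minimal self-contained sketch of that isolation pattern (plain asserts, nothing from the machine package):

```python
import os
from unittest.mock import patch

# Copy the current environment and drop the variable under test.
env = os.environ.copy()
env.pop("UNSET_VAR", None)

# clear=True makes the copy fully replace os.environ for the duration
# of the context, so UNSET_VAR is guaranteed absent inside it.
with patch.dict(os.environ, env, clear=True):
    assert "UNSET_VAR" not in os.environ
    # A lookup with a default therefore reliably falls back.
    assert os.environ.get("UNSET_VAR", "fallback") == "fallback"

# On exit, patch.dict restores the original environment.
```

Because patch.dict restores the original mapping on exit, tests can delete or overwrite variables freely without leaking state into later tests.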
Thank you. Can you also please add a new GitHub action job that runs the tests on PR creation and merge to main?
Read .github/workflows/build-release.yml:

name: Build and Release

on:
  push:
    branches:
      - main
      - build-test

jobs:
  publish:
    name: "Build and Release"
    runs-on: ubuntu-latest
    steps:
      - name: "Clone project repository"
        uses: actions/checkout@v4
      - name: "Install uv"
        uses: astral-sh/setup-uv@v4
      - name: "Install Python"
        run: uv python install 3.13
      - name: "Get release tag"
        id: release-tag
        run: |
          release_tag=$(./sh/make-release-tag.sh)
          echo "release-tag=v${release_tag}" >> $GITHUB_OUTPUT
      - name: "Build shiv package"
        run: |
          ./sh/build-package.sh
      - name: "Upload package as a github release"
        uses: softprops/action-gh-release@v2
        with:
          tag_name: ${{ steps.release-tag.outputs.release-tag }}
          draft: ${{ endsWith('build-test', github.ref ) }}
          files: ./build/machine
.github/workflows/test.yml:
name: Test

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  test:
    name: "Run Tests"
    runs-on: ubuntu-latest
    steps:
      - name: "Clone project repository"
        uses: actions/checkout@v4
      - name: "Install uv"
        uses: astral-sh/setup-uv@v4
      - name: "Install Python"
        run: uv python install 3.13
      - name: "Install dependencies"
        run: uv sync
      - name: "Run tests"
        run: uv run pytest tests/ -v
Created .github/workflows/test.yml. It triggers on pushes to main and on pull requests targeting main, using the same uv/Python setup pattern as the existing build workflow.
I see that the current tests are the kind of test I'd call "unit tests" -- the test is performed by running a Python process that executes the test harness, which in turn links in the target code to be tested. Each test therefore consists of a call from that test Python process into a function in the target code. This is fine, but I'd also like you to look into how you could additionally have tests for the "actual product". For example, to test that environment variable expansion works, such a test would run the built machine tool just as a user would. It would set one or more environment variables in the shell environment, it would create or use a pre-existing test config yaml file that has some environment variable substitutions, and it would verify that the behavior of the tool when run is consistent with those environment variables having been expanded correctly. Hopefully you see the qualitative difference between such a test and the regular unit-type tests.
Yes, I understand the distinction clearly — integration/end-to-end tests that exercise the actual built artifact as a subprocess, the same way a user would invoke it. Let me explore what's feasible.
machine/main.py:
1→import os
2→
3→import click
4→
5→from machine import config
6→from machine import constants
7→from machine.di import d
8→from machine.log import output
9→from machine.subcommands import create, destroy, list, projects, ssh_keys, domains, list_domain, types, status
10→from machine.types import CliOptions, MainCmdCtx
11→from machine.util import load_session_id
12→
13→CLICK_CONTEXT_SETTINGS = dict(help_option_names=["-h", "--help"])
14→
15→
16→@click.group(context_settings=CLICK_CONTEXT_SETTINGS)
17→@click.option("--debug", is_flag=True, default=False, help="Enable debug output")
18→@click.option("--quiet", is_flag=True, default=False, help="Suppress all non-essential output")
19→@click.option("--verbose", is_flag=True, default=False, help="Enable verbose output")
20→@click.option("--dry-run", is_flag=True, default=False, help="Run but do not do anything")
21→@click.option("--config-file", metavar="<PATH>", help=f"Specify the config file (default {constants.default_config_file_path})")
22→@click.option(
23→ "--session-id", metavar="<ID>", default=load_session_id(), help=f"Override the default session ID (default {load_session_id()})"
24→)
25→@click.pass_context
26→def main(context, debug, quiet, verbose, dry_run, config_file, session_id):
27→ options = CliOptions(debug, quiet, verbose, dry_run)
28→ d.opt = options
29→ # Skip config loading for version subcommand since it doesn't need it
30→ # and should work even when no config file exists (#25)
31→ if context.invoked_subcommand == "version":
32→ return
33→ main_context = MainCmdCtx(config.get(config_file), session_id)
34→ context.obj = main_context
35→
36→
37→@main.command()
38→@click.pass_context
39→def version(context):
40→ try:
41→ version_file = os.path.join(os.path.dirname(__file__), "version.txt")
42→ with open(version_file) as f:
43→ version_string = f.read().strip()
44→ except FileNotFoundError:
45→ version_string = "dev"
46→ output(version_string)
47→
48→
49→main.add_command(create.command, "create")
50→main.add_command(destroy.command, "destroy")
51→main.add_command(domains.command, "domains")
52→main.add_command(list.command, "list")
53→main.add_command(list_domain.command, "list-domain")
54→main.add_command(projects.command, "projects")
55→main.add_command(ssh_keys.command, "ssh-keys")
56→main.add_command(types.command, "types")
57→main.add_command(status.command, "status")
Task(Explore):
{
  "subagent_type": "Explore",
  "prompt": "Quickly explore the machine/subcommands/ directory. Read all subcommand files to understand what CLI commands exist, what arguments they take, and which ones load/use the config file. I need to understand which commands could be exercised in a test without actually talking to DigitalOcean."
}

Perfect! Now I have a comprehensive understanding of the CLI structure. Let me provide a summary:

## Summary: Machine CLI Commands & Configuration Usage

### Commands Overview

The machine CLI has **9 main commands** (plus a `version` command):

| Command | Uses Config | Arguments | Key Features |
|---------|------------|-----------|--------------|
| **create** | YES | `--name` (required), `--type` (required when --initialize), options for region, machine-size, image, tag, DNS, wait-for-ip | Creates DigitalOcean droplets; requires config for access_token, ssh_key; uses machine type definitions from config |
| **destroy** | YES | Droplet IDs (variadic); options for confirm, delete-dns, --all | Destroys droplets; reads DNS zone from config |
| **list** | YES | Filtering: --id, --name, --tag, --type, --region; options for --all, --quiet, --output format, --unique | Lists droplets; filters by session tags; needs config for access_token |
| **status** | YES | Filtering: --id, --name, --tag, --type, --region; options for --all, --quiet, --output format, --status-check | Gets droplet status; makes HTTP requests to droplets at port 4242; needs config for access_token |
| **list-domain** | YES | Zone (optional), filtering: --name, --type; options for --all, --quiet, --output format | Lists DNS records; defaults to dns_zone from config if not specified |
| **domains** | YES | No arguments | Lists all DigitalOcean domains; purely informational |
| **projects** | YES | No arguments | Lists all DigitalOcean projects; purely informational |
| **ssh-keys** | YES | No arguments | Lists SSH keys in DigitalOcean; purely informational |
| **types** | NO | No arguments | Lists machine types defined in config file; **does NOT require config to load** (only reads machines section) |
| **version** | NO | No arguments | Prints version; **explicitly skips config loading** |

### Config Dependencies

**Commands that DON'T require config:**
- `version` - explicitly skips config loading (line 31-32 in main.py)
- `types` - only reads the `machines` section from config (doesn't need digital-ocean settings)

**Commands that DO require config:**
- All others need `digital-ocean.access-token` to instantiate DigitalOcean Manager
- `create`, `destroy`, `list-domain` also read `digital-ocean.dns-zone`
- `create` additionally needs machine type definitions from the config

### Testable Commands (Without DigitalOcean API)

Commands that could theoretically be exercised in tests without hitting DigitalOcean:

1. **`version`** - Pure local operation, no API calls
2. **`types`** - Only reads config file, no API calls
3. **Any command with `--help` flag** - Just displays usage

Commands that could be partially mocked:
- **`list`** - Could mock the Manager to return test droplets
- **`status`** - Could mock the Manager + HTTP requests
- **`list-domain`** - Could mock the Domain API calls
- **`domains`, `projects`, `ssh-keys`** - Could mock the Manager API calls

Commands that are harder to test:
- **`create`, `destroy`** - These actually modify cloud resources

### File Paths

- Main CLI entry: `/home/david/projects/bpi/machine/machine/main.py`
- Subcommands directory: `/home/david/projects/bpi/machine/machine/subcommands/`
- Config loading: `/home/david/projects/bpi/machine/machine/config.py`

Good findings. The types command reads config but makes no API calls — perfect for testing env var expansion end-to-end. Let me look at it.
machine/subcommands/types.py:
import click

from machine.log import output
from machine.config import get_machines


@click.command(help="List projects")
@click.pass_context
def command(context):
    config_machines = get_machines()
    names = list(config_machines.keys())
    names.sort()
    for n in names:
        output(n)

machine/constants.py:
from pathlib import Path

default_config_dir_path = str(Path.home()) + "/.machine"
default_config_file_path = default_config_dir_path + "/config.yml"
default_session_id_file_path = default_config_dir_path + "/session-id.yml"

Now I understand the picture. With --debug, the CLI prints the parsed config to stderr, so I can verify expansion happened. And types only needs the config file, no API calls. Let me create the integration tests.
"""Integration tests that run the machine CLI as a subprocess, the same way a user would."""
import os
import subprocess
import tempfile
import textwrap
import pytest
def run_machine(*args, env_override=None):
"""Run the machine CLI as a subprocess and return the result."""
env = os.environ.copy()
if env_override:
env.update(env_override)
result = subprocess.run(
["uv", "run", "machine"] + list(args),
capture_output=True,
text=True,
env=env,
)
return result
def write_config(path, content):
with open(path, "w") as f:
f.write(textwrap.dedent(content))
class TestVersionCommand:
    """Smoke test: the CLI runs and the version command works without any config."""

    def test_version_runs(self):
        result = run_machine("version")
        assert result.returncode == 0
        assert result.stdout.strip()  # prints something

    def test_help_runs(self):
        result = run_machine("--help")
        assert result.returncode == 0
        assert "Usage" in result.stdout
class TestEnvVarExpansionIntegration:
    """End-to-end tests that verify environment variable expansion works
    when the actual machine tool is invoked with a config file."""

    @pytest.fixture()
    def config_dir(self, tmp_path):
        return tmp_path
    def test_env_var_expanded_in_config(self, config_dir):
        config_file = config_dir / "config.yml"
        write_config(
            config_file,
            """\
            digital-ocean:
              access-token: "${TEST_DO_TOKEN}"
              ssh-key: test-key
              machine-size: s-1vcpu-1gb
              image: ubuntu-22-04-x64
              region: nyc1
              project: test-project
            machines:
              test-machine:
                new-user-name: testuser
            """,
        )
        result = run_machine(
            "--debug",
            "--config-file",
            str(config_file),
            "types",
            env_override={"TEST_DO_TOKEN": "tok_secret_12345"},
        )
        assert result.returncode == 0
        # The debug output on stderr should contain the expanded token value
        assert "tok_secret_12345" in result.stderr
        # And should NOT contain the unexpanded variable reference
        assert "${TEST_DO_TOKEN}" not in result.stderr
        # The types command should list the machine name on stdout
        assert "test-machine" in result.stdout
    def test_multiple_env_vars_expanded(self, config_dir):
        config_file = config_dir / "config.yml"
        write_config(
            config_file,
            """\
            digital-ocean:
              access-token: "${TEST_TOKEN}"
              ssh-key: "${TEST_SSH_KEY}"
              machine-size: s-1vcpu-1gb
              image: ubuntu-22-04-x64
              region: nyc1
              project: test-project
            machines:
              dev:
                new-user-name: devuser
            """,
        )
        result = run_machine(
            "--debug",
            "--config-file",
            str(config_file),
            "types",
            env_override={"TEST_TOKEN": "expanded_token", "TEST_SSH_KEY": "expanded_key"},
        )
        assert result.returncode == 0
        assert "expanded_token" in result.stderr
        assert "expanded_key" in result.stderr
    def test_env_var_with_default_uses_default_when_unset(self, config_dir):
        config_file = config_dir / "config.yml"
        write_config(
            config_file,
            """\
            digital-ocean:
              access-token: fake-token
              ssh-key: test-key
              machine-size: "${TEST_SIZE:-s-2vcpu-4gb}"
              image: ubuntu-22-04-x64
              region: nyc1
              project: test-project
            machines:
              myvm:
                new-user-name: admin
            """,
        )
        # Make sure TEST_SIZE is not in the environment
        env = {"TEST_SIZE": None}
        clean_env = os.environ.copy()
        clean_env.pop("TEST_SIZE", None)
        result = run_machine(
            "--debug",
            "--config-file",
            str(config_file),
            "types",
            env_override={},
        )
        # Run with TEST_SIZE explicitly removed
        result = subprocess.run(
            ["uv", "run", "machine", "--debug", "--config-file", str(config_file), "types"],
            capture_output=True,
            text=True,
            env=clean_env,
        )
        assert result.returncode == 0
        assert "s-2vcpu-4gb" in result.stderr
    def test_env_var_with_default_uses_value_when_set(self, config_dir):
        config_file = config_dir / "config.yml"
        write_config(
            config_file,
            """\
            digital-ocean:
              access-token: fake-token
              ssh-key: test-key
              machine-size: "${TEST_SIZE:-s-2vcpu-4gb}"
              image: ubuntu-22-04-x64
              region: nyc1
              project: test-project
            machines:
              myvm:
                new-user-name: admin
            """,
        )
        result = run_machine(
            "--debug",
            "--config-file",
            str(config_file),
            "types",
            env_override={"TEST_SIZE": "s-4vcpu-8gb"},
        )
        assert result.returncode == 0
        assert "s-4vcpu-8gb" in result.stderr
        assert "s-2vcpu-4gb" not in result.stderr
    def test_missing_env_var_without_default_exits_with_error(self, config_dir):
        config_file = config_dir / "config.yml"
        write_config(
            config_file,
            """\
            digital-ocean:
              access-token: "${DEFINITELY_NOT_SET_VAR}"
              ssh-key: test-key
              machine-size: s-1vcpu-1gb
              image: ubuntu-22-04-x64
              region: nyc1
              project: test-project
            machines:
              myvm:
                new-user-name: admin
            """,
        )
        clean_env = os.environ.copy()
        clean_env.pop("DEFINITELY_NOT_SET_VAR", None)
        result = subprocess.run(
            ["uv", "run", "machine", "--config-file", str(config_file), "types"],
            capture_output=True,
            text=True,
            env=clean_env,
        )
        assert result.returncode != 0
        assert "DEFINITELY_NOT_SET_VAR" in result.stderr
    def test_env_var_in_machine_config_section(self, config_dir):
        config_file = config_dir / "config.yml"
        write_config(
            config_file,
            """\
            digital-ocean:
              access-token: fake-token
              ssh-key: test-key
              machine-size: s-1vcpu-1gb
              image: ubuntu-22-04-x64
              region: nyc1
              project: test-project
            machines:
              web-server:
                new-user-name: "${TEST_USERNAME}"
              db-server:
                new-user-name: admin
            """,
        )
        result = run_machine(
            "--debug",
            "--config-file",
            str(config_file),
            "types",
            env_override={"TEST_USERNAME": "deploy_user"},
        )
        assert result.returncode == 0
        assert "deploy_user" in result.stderr
        # Both machine types should be listed
        assert "db-server" in result.stdout
        assert "web-server" in result.stdout
File created successfully at: /home/david/projects/bpi/machine/tests/test_cli_integration.py
============================= test session starts ==============================
platform linux -- Python 3.12.3, pytest-9.0.2, pluggy-1.6.0 -- /home/david/projects/bpi/machine/.venv/bin/python3
cachedir: .pytest_cache
rootdir: /home/david/projects/bpi/machine
configfile: pyproject.toml
collecting ... collected 8 items

tests/test_cli_integration.py::TestVersionCommand::test_version_runs PASSED [ 12%]
tests/test_cli_integration.py::TestVersionCommand::test_help_runs PASSED [ 25%]
tests/test_cli_integration.py::TestEnvVarExpansionIntegration::test_env_var_expanded_in_config PASSED [ 37%]
tests/test_cli_integration.py::TestEnvVarExpansionIntegration::test_multiple_env_vars_expanded PASSED [ 50%]
tests/test_cli_integration.py::TestEnvVarExpansionIntegration::test_env_var_with_default_uses_default_when_unset PASSED [ 62%]
tests/test_cli_integration.py::TestEnvVarExpansionIntegration::test_env_var_with_default_uses_value_when_set PASSED [ 75%]
tests/test_cli_integration.py::TestEnvVarExpansionIntegration::test_missing_env_var_without_default_exits_with_error PASSED [ 87%]
tests/test_cli_integration.py::TestEnvVarExpansionIntegration::test_env_var_in_machine_config_section PASSED [100%]

============================== 8 passed in 0.96s ===============================
Exit code 1

============================= test session starts ==============================
platform linux -- Python 3.12.3, pytest-9.0.2, pluggy-1.6.0 -- /home/david/projects/bpi/machine/.venv/bin/python3
cachedir: .pytest_cache
rootdir: /home/david/projects/bpi/machine
configfile: pyproject.toml
collecting ... collected 21 items

tests/test_cli_integration.py::TestVersionCommand::test_version_runs PASSED [  4%]
tests/test_cli_integration.py::TestVersionCommand::test_help_runs PASSED [  9%]
tests/test_cli_integration.py::TestEnvVarExpansionIntegration::test_env_var_expanded_in_config PASSED [ 14%]
tests/test_cli_integration.py::TestEnvVarExpansionIntegration::test_multiple_env_vars_expanded PASSED [ 19%]
tests/test_cli_integration.py::TestEnvVarExpansionIntegration::test_env_var_with_default_uses_default_when_unset PASSED [ 23%]
tests/test_cli_integration.py::TestEnvVarExpansionIntegration::test_env_var_with_default_uses_value_when_set PASSED [ 28%]
tests/test_cli_integration.py::TestEnvVarExpansionIntegration::test_missing_env_var_without_default_exits_with_error PASSED [ 33%]
tests/test_cli_integration.py::TestEnvVarExpansionIntegration::test_env_var_in_machine_config_section PASSED [ 38%]
tests/test_config.py::TestExpandEnvVars::test_plain_string_unchanged PASSED [ 42%]
tests/test_config.py::TestExpandEnvVars::test_non_string_types_unchanged PASSED [ 47%]
tests/test_config.py::TestExpandEnvVars::test_simple_variable_substitution PASSED [ 52%]
tests/test_config.py::TestExpandEnvVars::test_variable_embedded_in_string PASSED [ 57%]
tests/test_config.py::TestExpandEnvVars::test_multiple_variables_in_string PASSED [ 61%]
tests/test_config.py::TestExpandEnvVars::test_default_value_when_var_unset PASSED [ 66%]
tests/test_config.py::TestExpandEnvVars::test_default_value_ignored_when_var_set PASSED [ 71%]
tests/test_config.py::TestExpandEnvVars::test_default_value_empty_string PASSED [ 76%]
tests/test_config.py::TestExpandEnvVars::test_unset_variable_without_default_exits PASSED [ 80%]
tests/test_config.py::TestExpandEnvVars::test_dict_values_expanded PASSED [ 85%]
tests/test_config.py::TestExpandEnvVars::test_nested_dict_expanded PASSED [ 90%]
tests/test_config.py::TestExpandEnvVars::test_list_values_expanded PASSED [ 95%]
tests/test_config.py::TestExpandEnvVars::test_mixed_nested_structure PASSED [100%]

============================== 21 passed in 0.94s ==============================

tests/test_cli_integration.py:5:1: F401 'tempfile' imported but unused
tests/test_cli_integration.py:129:9: F841 local variable 'env' is assigned to but never used
Old:
import os
import subprocess
import tempfile
import textwrap
import pytest
New:
import os
import subprocess
import textwrap
import pytest
The file /home/david/projects/bpi/machine/tests/test_cli_integration.py has been updated successfully.
Old:
        # Make sure TEST_SIZE is not in the environment
        env = {"TEST_SIZE": None}
        clean_env = os.environ.copy()
New:
        # Make sure TEST_SIZE is not in the environment
        clean_env = os.environ.copy()
The file /home/david/projects/bpi/machine/tests/test_cli_integration.py has been updated successfully.
uv run pytest tests/ -v 2>&1 && uv run flake8 tests/ 2>&1 && echo "ALL CLEAN"
============================= test session starts ==============================
platform linux -- Python 3.12.3, pytest-9.0.2, pluggy-1.6.0 -- /home/david/projects/bpi/machine/.venv/bin/python3
cachedir: .pytest_cache
rootdir: /home/david/projects/bpi/machine
configfile: pyproject.toml
collecting ... collected 21 items

tests/test_cli_integration.py::TestVersionCommand::test_version_runs PASSED [  4%]
tests/test_cli_integration.py::TestVersionCommand::test_help_runs PASSED [  9%]
tests/test_cli_integration.py::TestEnvVarExpansionIntegration::test_env_var_expanded_in_config PASSED [ 14%]
tests/test_cli_integration.py::TestEnvVarExpansionIntegration::test_multiple_env_vars_expanded PASSED [ 19%]
tests/test_cli_integration.py::TestEnvVarExpansionIntegration::test_env_var_with_default_uses_default_when_unset PASSED [ 23%]
tests/test_cli_integration.py::TestEnvVarExpansionIntegration::test_env_var_with_default_uses_value_when_set PASSED [ 28%]
tests/test_cli_integration.py::TestEnvVarExpansionIntegration::test_missing_env_var_without_default_exits_with_error PASSED [ 33%]
tests/test_cli_integration.py::TestEnvVarExpansionIntegration::test_env_var_in_machine_config_section PASSED [ 38%]
tests/test_config.py::TestExpandEnvVars::test_plain_string_unchanged PASSED [ 42%]
tests/test_config.py::TestExpandEnvVars::test_non_string_types_unchanged PASSED [ 47%]
tests/test_config.py::TestExpandEnvVars::test_simple_variable_substitution PASSED [ 52%]
tests/test_config.py::TestExpandEnvVars::test_variable_embedded_in_string PASSED [ 57%]
tests/test_config.py::TestExpandEnvVars::test_multiple_variables_in_string PASSED [ 61%]
tests/test_config.py::TestExpandEnvVars::test_default_value_when_var_unset PASSED [ 66%]
tests/test_config.py::TestExpandEnvVars::test_default_value_ignored_when_var_set PASSED [ 71%]
tests/test_config.py::TestExpandEnvVars::test_default_value_empty_string PASSED [ 76%]
tests/test_config.py::TestExpandEnvVars::test_unset_variable_without_default_exits PASSED [ 80%]
tests/test_config.py::TestExpandEnvVars::test_dict_values_expanded PASSED [ 85%]
tests/test_config.py::TestExpandEnvVars::test_nested_dict_expanded PASSED [ 90%]
tests/test_config.py::TestExpandEnvVars::test_list_values_expanded PASSED [ 95%]
tests/test_config.py::TestExpandEnvVars::test_mixed_nested_structure PASSED [100%]

============================== 21 passed in 0.94s ==============================

ALL CLEAN
All 21 tests pass and lint is clean.
Here's what was added in tests/test_cli_integration.py:
Approach: Each test runs uv run machine as a subprocess — the same way a user would. Tests create temporary config YAML files with ${VAR} references, set environment variables in the subprocess environment, and then verify the tool's actual output.
The key trick: The types subcommand loads and parses the config (including env var expansion) but makes no DigitalOcean API calls. Combined with --debug, which prints the parsed config to stderr, this lets us verify that expansion actually happened in the real tool.
Integration test cases (6 env var tests + 2 smoke tests):
- test_version_runs / test_help_runs — smoke tests that the CLI launches
- test_env_var_expanded_in_config — ${TEST_DO_TOKEN} is replaced with the env var value
- test_multiple_env_vars_expanded — multiple vars in different config fields
- test_env_var_with_default_uses_default_when_unset — ${VAR:-default} falls back
- test_env_var_with_default_uses_value_when_set — ${VAR:-default} uses the set value
- test_missing_env_var_without_default_exits_with_error — tool exits non-zero with error message
- test_env_var_in_machine_config_section — expansion works in the machines: section too
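As an aside, the subprocess-driving pattern above isn't specific to machine. This self-contained sketch uses sys.executable -c as a hypothetical stand-in for the built CLI (so it runs without the repo installed) to show the environment-overlay technique in isolation; run_cli and DEMO_VAR are illustrative names, not from the repo:

```python
import os
import subprocess
import sys


def run_cli(code, env_override=None):
    """Run a child process the way the integration tests drive machine:
    copy the parent environment, overlay test-specific variables, and
    capture stdout/stderr as text."""
    env = os.environ.copy()
    if env_override:
        env.update(env_override)
    return subprocess.run(
        [sys.executable, "-c", code],  # stand-in for ["uv", "run", "machine", ...]
        capture_output=True,
        text=True,
        env=env,
    )


# The child process sees the overlaid variable exactly as the real tool would.
result = run_cli(
    "import os; print(os.environ.get('DEMO_VAR', 'unset'))",
    env_override={"DEMO_VAR": "expanded"},
)
assert result.returncode == 0
assert result.stdout.strip() == "expanded"
```

The same copy-then-overlay approach also supports the removal case: popping a key from the copied dict before passing it as env guarantees the child never sees that variable, regardless of the parent shell.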
Thanks! Glad the approach worked out well. The two test layers — unit tests for fast, focused validation and integration tests for real end-to-end confidence — complement each other nicely.