Driftless
Warning: This is an experimental, AI-assisted project. Bugs and other shortcomings are expected and should be reported as GitHub Issues. PRs are welcome!
A lightweight Rust agent for declarative system configuration, metrics gathering, and log forwarding via GitOps.
Features
- Idempotent Configuration: Define desired system state with YAML/JSON/TOML configurations
- Multi-Platform: Supports Linux systems with various package managers
- Advanced Template System: Ansible-like Jinja2 templating with variables, filters, and built-in functions
- Three Distinct Operation Types:
  - Configuration Operations: Define and enforce desired system state (like Ansible tasks)
  - Facts Collectors: Gather system metrics, inventory, and monitoring data
  - Log Sources/Outputs: Collect, process, and forward log data
- Agent Mode: Continuous monitoring and configuration drift detection
- Rich Documentation: Comprehensive operation references with examples in YAML, JSON, and TOML
Installation
cargo install driftless
Quick Start
User Installation (Per-User Configuration)
- Create a configuration directory:
mkdir -p ~/.config/driftless/config
- Create an apply configuration:
```yaml
# ~/.config/driftless/config/apply.yml
tasks:
  - type: package
    name: nginx
    state: present
  - type: service
    name: nginx
    state: started
    enabled: true
  - type: file
    path: /etc/nginx/sites-available/default
    state: present
    content: |
      server {
        listen 80;
        server_name _;
        root /var/www/html;
        index index.html;
      }
    mode: "0644"
```
- Apply the configuration:
driftless apply
Documentation
Driftless provides comprehensive documentation:
- User Guide - Getting started, configuration examples, and agent mode
- Reference - Complete operation references, facts, logs, and templates
- Developer Guide - Development, plugins, and contributing
CLI Commands
Driftless provides several CLI commands for different purposes:
# Configuration Operations
driftless apply # Apply configuration operations
driftless apply --dry-run # Preview changes without applying
# Facts Collection
driftless facts # Run facts collectors
# Log Management
driftless logs # Run log sources and outputs
# Agent Mode
driftless agent # Run in continuous monitoring mode
Configuration
Driftless supports multiple configuration formats and automatically detects configuration directories:
Directory Structure
Driftless checks for configuration in this order:
- System-wide: `/etc/driftless/` (highest priority)
- User-specific: `~/.config/driftless/` (fallback)
System-wide Configuration (/etc/driftless/)
sudo mkdir -p /etc/driftless
sudo chown -R driftless:driftless /etc/driftless
User Configuration (~/.config/driftless/)
mkdir -p ~/.config/driftless
Directory Layout
driftless/
├── config/
│ ├── apply.yml # Configuration operation definitions
│ ├── facts.yml # Facts collector settings
│ ├── logs.yml # Log source/output settings
│ └── agent.yml # Agent mode configuration
├── plugins/ # Custom plugins directory
├── secrets.yml # Secrets file (YAML format)
└── secrets.env # Secrets file (environment format)
Configuration Formats
All configuration files support YAML, JSON, or TOML formats. Files can be named with any extension (.yml, .yaml, .json, .toml) and Driftless will auto-detect the format.
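The extension-based auto-detection described above can be sketched as follows. This is a minimal, hypothetical illustration (the function and enum names are assumptions, not Driftless's actual internals):

```rust
// Hypothetical sketch of extension-based format detection, not the actual
// Driftless implementation.
use std::path::Path;

#[derive(Debug, PartialEq)]
enum ConfigFormat {
    Yaml,
    Json,
    Toml,
}

// Map a config file's extension to a format tag; unknown extensions yield None.
fn detect_format(path: &str) -> Option<ConfigFormat> {
    match Path::new(path).extension()?.to_str()? {
        "yml" | "yaml" => Some(ConfigFormat::Yaml),
        "json" => Some(ConfigFormat::Json),
        "toml" => Some(ConfigFormat::Toml),
        _ => None,
    }
}

fn main() {
    assert_eq!(detect_format("apply.yml"), Some(ConfigFormat::Yaml));
    assert_eq!(detect_format("facts.toml"), Some(ConfigFormat::Toml));
    assert_eq!(detect_format("logs.conf"), None);
    println!("format detection ok");
}
```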
Development
Building
cargo build --release
Testing
cargo test
Contributing
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Update documentation
- Submit a pull request
For detailed development information, see the Developer Guide.
License
Licensed under the Apache License, Version 2.0.
Developer Documentation
This section contains documentation for developers working on the Driftless project itself.
Contents
- AI Contribution - Information about AI models used in development
- Development Guide - Setting up your development environment and running validation checks
- Podman Troubleshooting - Common Podman + Dev Containers troubleshooting steps
- Release Workflow - Documentation on the release process
- Repository Settings - Managing repository settings programmatically
- Rust API Documentation - Link to auto-generated Rust API documentation
AI Contribution
This document outlines the AI models and tools used in the development of the Driftless project.
Code and Documentation
Nearly all code and documentation in the Driftless project was written by Grok Code Fast 1, an AI model developed by xAI, using Visual Studio Code with the Copilot extension. This includes:
- Rust source code in the `src/` directory
- Documentation in the `docs/` directory
- Configuration files and scripts
- Test files
GitHub Actions Workflows and Code Reviews
The GitHub Actions workflows were created and debugged using Claude Sonnet 4.5 integrated with the GitHub Copilot web interface. This includes:
- CI/CD pipelines in `.github/workflows/`
- Workflow validation and debugging
- Code review assistance for PRs
Disclaimer
As an experimental, AI-developed project, Driftless may contain bugs and other shortcomings. Please report any issues as GitHub Issues.
Design Document for Driftless
Goals
- Streamlined system configuration, inventory, and monitoring agent
- Built for GitOps using a single repository of truth
- Tiny, efficient, and memory-safe executable (via Rust)
Features
- Configuration management
- Applies declarative system configs idempotently
- Requires list of configuration tasks
- Writes audit/diff logs to a local directory (e.g. NFS), HTTP endpoint, or S3 bucket
- Alternative to Ansible/Chef/Puppet
- Crates: `git2`, `nix`, `reqwest`, `rust-s3`, `serde`
- Metrics gathering
- Gather host metrics (CPU, mem, disk, etc)
- Requires list of metrics to collect, poll interval, and thresholds
- Export metrics via a `/metrics` endpoint or push to an S3 bucket
- Alternative to Prometheus Node Exporter
- Crates: `prometheus`, `rust-s3`, `sysinfo`
- Log collection
- Tails and forwards logs
- Requires list of paths to tail and filters/parsers to use
- Writes logs to a local directory (e.g. NFS), S3 bucket, syslog, or HTTP (e.g. an ELK stack)
- Alternative to FileBeat
- Crates: `flate2`, `reqwest`, `rust-s3`
- Secrets management
- Remove secrets from configuration using variable substitution
- Reads secrets from environment variables and `env` files outside the input directory
- Crates: `secret_vault`
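The variable-substitution idea above can be sketched as a simple token replacement. This is purely illustrative: the `${NAME}` placeholder syntax and the `substitute_secrets` function are assumptions for this sketch, not Driftless's documented syntax:

```rust
// Hypothetical sketch of secrets substitution: replace ${NAME} tokens in a
// config string with values from a secrets map. The placeholder syntax is an
// assumption, not Driftless's actual template syntax.
use std::collections::HashMap;

fn substitute_secrets(input: &str, secrets: &HashMap<String, String>) -> String {
    let mut out = input.to_string();
    for (key, value) in secrets {
        // Replace every occurrence of ${KEY} with the secret value.
        out = out.replace(&format!("${{{}}}", key), value);
    }
    out
}

fn main() {
    let mut secrets = HashMap::new();
    secrets.insert("DB_PASSWORD".to_string(), "s3cret".to_string());
    let config = "password: ${DB_PASSWORD}";
    assert_eq!(substitute_secrets(config, &secrets), "password: s3cret");
    println!("substitution ok");
}
```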
Design details
- Runs as a CLI
- Uses the given directory of configuration files as input (e.g. a cloned Git repo)
- Configuration files use JSON, TOML, or YAML syntax (format auto-detected from file extension)
- Configuration file schemas include:
  - `apply`: Idempotent system configuration tasks
  - `facts`: Facts, metrics, and other information gathering tasks
  - `logs`: Log file tailing and forwarding tasks
- Default configuration directory: `/etc/driftless/config` (system-wide) or `~/.config/driftless/config` (user)
- Secrets passed via environment variables, `/etc/driftless/secrets.yml`, `/etc/driftless/secrets.env`, `~/.config/driftless/secrets.yml`, or `~/.config/driftless/secrets.env`
- Sub-command names mirror file schemas (e.g. `apply`, `facts`, `logs`) for running tasks
  - The `apply` sub-command should include a `--dry-run` flag or similar to only output diffs
- An additional `agent` sub-command activates agent mode
  - Gathers built-in facts
  - If configured, starts a Prometheus metrics endpoint (e.g. `0.0.0.0:8000/metrics`)
  - Starts an event loop
- Reads configuration files from directory
- Gathers configured additional facts and metrics at requested interval
- Starts collecting and forwarding configured log files
- Runs apply tasks at requested interval
Potential Future Enhancements
- Remote secrets provider support (AWS, GCP, Vault/OpenBao, etc)
- Distributed scheduling/task management
- Inventory reporting (hardware/software)
- Reusable modules
- Extensible with plugins via the `wasmtime` crate
- Plugin registry and download manager
Nix Integration Opportunities
The nix crate provides Rust bindings to *nix APIs. We can leverage this for:
High Priority (System-level operations)
- Process management - Enhanced process monitoring/control
- Signal handling - Send signals to processes
- File permissions - More robust Unix permission handling
- User/group operations - Lower-level user/group management
- Mount operations - Filesystem mounting
- Network interfaces - Network interface management
- System information - Detailed system/hardware info
Medium Priority (Infrastructure automation)
- Sysctl operations - Kernel parameter management
- Capability management - Linux capabilities
- Namespace operations - Container/namespace management
- Cgroup management - Control groups
- Inotify monitoring - File system monitoring
- Socket operations - Unix domain sockets
Low Priority (Advanced features)
- ACL management - Access control lists
- Extended attributes - File extended attributes
- Audit operations - System audit logging
- KVM operations - Kernel-based virtual machines
TODO
- Create task prompts in the TODO list to add support for macOS and Windows in all applicable areas of the codebase
- Review usages of `dead_code`, `unsafe`, and `unused_imports` to silence warnings and determine whether code should be used or cleaned up according to Rust best practices. Use this opportunity to clean up unused code and dependencies to reduce release binary size and improve maintainability.
- Review the codebase for consistent error-handling patterns and improve as needed
- Ensure all dependencies in `Cargo.toml` are up-to-date with the latest stable versions
- Review the codebase for usage of Rust best practices and guidelines
- Review the codebase for safety and security vulnerabilities and apply mitigations as needed
- Ensure comprehensive test coverage and cleanup any clippy warnings. Tests should be written for the intent of the code not the implementation details.
- Review the auto-generated and manually managed documentation in the `docs/` directory and validate that information is accurate against the current codebase. Look for cleanup, clarification, expansion, and reorganization opportunities. Ensure all auto-generated documentation contains a banner indicating it is auto-generated and should not be manually edited.
- Perform a final review of the entire codebase, documentation, and project structure to ensure consistency, quality, and readiness for production use.
Development Guide
This guide covers the development workflow for contributing to Driftless.
Prerequisites
- Rust (stable, beta, or MSRV 1.92)
- Git
- Python 3.x (for documentation generation)
- Visual Studio Code with the Dev Containers extension (recommended)
- A container engine for devcontainers:
- Linux: Docker Engine or Podman
- macOS: Docker Desktop or Podman Desktop
- Windows: Docker Desktop or Podman Desktop (best effort only; see Windows notes below)
Platform Support Policy
- Linux: Fully supported for development and CI parity.
- macOS: Fully supported for development via devcontainers (Docker Desktop or Podman Desktop).
- Windows: Best effort support.
- We accept fixes for clear Windows-specific edge cases.
- We do not guarantee full parity for all local workflows.
- Windows user/runtime support for the binary is also best effort.
Devcontainers (Recommended)
Using a devcontainer is the recommended way to get a consistent toolchain across machines.
Linux
- Docker: install Docker Engine and verify `docker ps` works.
- Podman: install Podman and verify `podman ps` works.
For Podman with VS Code Dev Containers, configure VS Code to use Podman:
{
"dev.containers.dockerPath": "podman"
}
macOS
- Docker Desktop: install Docker Desktop and verify `docker ps` works.
- Podman Desktop: install Podman Desktop, initialize/start the Podman machine, and verify `podman ps` works.
For Podman with VS Code Dev Containers on macOS:
- Set VS Code to use Podman:
{
"dev.containers.dockerPath": "podman"
}
- Ensure a Docker-compatible socket is exposed (Podman Desktop normally configures this).
- Rebuild the devcontainer after changing engine settings.
Windows (Best Effort)
- Preferred path is VS Code + Dev Containers with Docker Desktop or Podman Desktop.
- WSL2-based development is typically more reliable than native Windows filesystem mounts.
- Limit expectations to Windows-specific edge-case fixes, not full parity for every local setup.
Open the Repository in a Devcontainer
- Clone the repository.
- Open it in VS Code.
- Run Dev Containers: Reopen in Container.
- After engine changes (Docker ↔ Podman, rootful ↔ rootless, or user changes), run a clean build:
cargo clean
Setting Up Your Development Environment
If you are not using a devcontainer, use this native setup:
- Clone the repository:
git clone https://github.com/driftless-hq/driftless.git
cd driftless
- Build the project:
cargo build
- Run tests:
cargo test
Resource Guidance for macOS/Windows VMs
Containerized Rust builds can be memory-intensive (especially linking tests and release binaries).
- If you see linker failures like `ld terminated with signal 9 [Killed]`, the VM/container likely hit an out-of-memory condition.
- Increase the memory available to Docker Desktop/Podman Desktop, and/or reduce Cargo parallelism:
CARGO_BUILD_JOBS=2 cargo test --all --quiet
- CI should continue using `cargo build --release` for the smallest, most optimized binary.
- For local builds on constrained machines, use the lower-memory profile:
cargo build --profile release-local -j 2
Or use the helper script:
./scripts/build-release-local.sh
- The validation script already respects `CARGO_BUILD_JOBS` and defaults to a conservative value.
For Podman-specific setup and recovery steps, see Podman Devcontainer Troubleshooting.
Running Validation Checks
Before committing your changes, you should run the validation script to catch potential CI failures early:
./scripts/validate.sh
This script runs all the validation checks that are performed in the CI pipeline:
- Code Formatting Check - Ensures code follows Rust formatting standards (`cargo fmt --all -- --check`)
- Clippy Linter - Runs the Rust linter to catch common mistakes and enforce best practices (`cargo clippy -- -D warnings`)
- Documentation Validation - Verifies that generated documentation is up-to-date
By default, the script runs all checks and reports all failures. To exit immediately on the first failure, use the `--fail-fast` flag:
./scripts/validate.sh --fail-fast
Fixing Validation Issues
If validation checks fail, here’s how to fix them:
Formatting Issues
Run cargo fmt to automatically fix formatting:
cargo fmt --all
Clippy Warnings
Review the clippy output and fix the issues manually. The warnings will guide you on what needs to be changed.
Documentation Issues
Regenerate the documentation:
./scripts/generate-docs.sh
Building Documentation
To generate and view documentation locally:
# Generate all documentation
./scripts/generate-docs.sh
# View Rust API documentation in your browser
cargo doc --open
Running the Project Locally
# Run in development mode
cargo run -- --help
# Run with specific command
cargo run -- apply --dry-run
Submitting Changes
- Run validation checks: `./scripts/validate.sh`
- Commit your changes
- Push to your fork
- Create a pull request
The CI pipeline will automatically run the same validation checks on your pull request.
Plugin Development Guide
This guide covers creating, building, and deploying plugins for the Driftless system. Plugins are WebAssembly (WASM) modules that extend Driftless functionality with custom tasks, facts collectors, template extensions, and log processing components.
Overview
Driftless plugins are compiled to WebAssembly and run in a secure sandbox with strict resource limits and execution timeouts. Plugins communicate with the host system through a JSON-based API, ensuring cross-language compatibility.
Plugin Architecture
Security Model
Plugins run in a restricted WebAssembly environment with:
- Memory limits: 64MB per plugin instance (configurable)
- Execution timeouts: 30 seconds maximum (configurable)
- Fuel limits: 1 billion instructions per execution
- No host system access: No filesystem, network, or system calls
- Import validation: Dangerous imports are blocked
Plugin Types
Plugins can register the following component types:
- Tasks: Custom automation tasks (apply, facts, logs)
- Facts Collectors: System information gathering
- Template Extensions: Custom Jinja2 filters and functions
- Log Sources: Custom log data sources
- Log Parsers: Custom log parsing logic
- Log Filters: Custom log filtering rules
- Log Outputs: Custom log output destinations
Getting Started
Examples
Before diving into the details, check out our plugin examples that demonstrate complete working plugins in multiple languages:
- Rust: Custom tasks and template extensions
- JavaScript: Custom tasks with webpack bundling
- TypeScript: Type-safe template extensions
- Python: Facts collectors (experimental)
Each example includes source code, build instructions, and usage documentation.
Prerequisites
- Rust 1.92+ with the `wasm32-wasi` target
- `wasm-pack` for building and packaging
- Basic knowledge of WebAssembly concepts
Setting Up a Plugin Project
Create a new Rust library project:
cargo new --lib my-plugin
cd my-plugin
Add dependencies to Cargo.toml:
[package]
name = "my-plugin"
version = "0.1.0"
edition = "2021"
[lib]
crate-type = ["cdylib"]
[dependencies]
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
wasm-bindgen = "0.2"
[dependencies.driftless-plugin]
# Use local path during development
path = "../driftless/src/plugin_interface"
# Or use published crate when available
# version = "0.1"
Basic Plugin Structure
```rust
use serde_json::Value;
use wasm_bindgen::prelude::*;

// Export required plugin interface functions
#[wasm_bindgen]
extern "C" {
    #[wasm_bindgen(js_namespace = console)]
    fn log(s: &str);
}

// Helper macro for logging
macro_rules! console_log {
    ($($t:tt)*) => (log(&format_args!($($t)*).to_string()))
}

#[wasm_bindgen]
pub fn get_task_definitions() -> String {
    let definitions = vec![serde_json::json!({
        "name": "my_custom_task",
        "type": "apply",
        "config_schema": {
            "type": "object",
            "properties": {
                "message": {"type": "string"}
            },
            "required": ["message"]
        }
    })];
    serde_json::to_string(&definitions).unwrap()
}

#[wasm_bindgen]
pub fn execute_task(name: &str, config_json: &str) -> String {
    match name {
        "my_custom_task" => {
            let config: Value = serde_json::from_str(config_json).unwrap();
            let message = config["message"].as_str().unwrap();
            console_log!("Executing custom task with message: {}", message);
            // Task implementation here
            serde_json::json!({
                "status": "success",
                "message": format!("Task executed with: {}", message)
            })
            .to_string()
        }
        _ => serde_json::json!({
            "status": "error",
            "message": format!("Unknown task: {}", name)
        })
        .to_string(),
    }
}
```
API Reference
Required Exports
All plugins must export these functions:
get_task_definitions() -> String
Returns a JSON array of task definitions.
Format:
```json
[{
  "name": "task_name",
  "type": "apply|facts|logs",
  "config_schema": {
    "type": "object",
    "properties": {...},
    "required": [...]
  }
}]
```
get_facts_collectors() -> String
Returns a JSON array of facts collector definitions.
get_template_extensions() -> String
Returns a JSON array of template extension definitions.
get_log_sources() -> String
Returns a JSON array of log source definitions.
get_log_parsers() -> String
Returns a JSON array of log parser definitions.
get_log_filters() -> String
Returns a JSON array of log filter definitions.
get_log_outputs() -> String
Returns a JSON array of log output definitions.
Execution Functions
execute_task(name: &str, config_json: &str) -> String
Execute a registered task.
Parameters:
- `name`: Task name
- `config_json`: JSON string of the task configuration
Returns: JSON string with execution result or error
execute_facts_collector(name: &str, config_json: &str) -> String
Execute a facts collector.
execute_log_source(name: &str, config_json: &str) -> String
Execute a log source.
execute_log_parser(name: &str, config_json: &str, input: &str) -> String
Execute a log parser.
execute_log_filter(name: &str, config_json: &str, entry_json: &str) -> String
Execute a log filter.
execute_log_output(name: &str, config_json: &str, entry_json: &str) -> String
Execute a log output.
execute_template_filter(name: &str, config_json: &str, value_json: &str, args_json: &str) -> String
Execute a template filter.
execute_template_function(name: &str, config_json: &str, args_json: &str) -> String
Execute a template function.
Host Imports (Available)
host_log(level: &str, message: &str)
Log a message from the plugin.
Parameters:
- `level`: `"error"`, `"warn"`, `"info"`, or `"debug"`
- `message`: Log message string
host_get_timestamp() -> u64
Get current Unix timestamp.
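A host-side implementation of `host_get_timestamp` could look like the following. This is an illustrative sketch only, assuming the host simply reports seconds since the Unix epoch:

```rust
// Illustrative sketch of a host-side timestamp provider, not the actual
// Driftless host code.
use std::time::{SystemTime, UNIX_EPOCH};

fn host_get_timestamp() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is set before the Unix epoch")
        .as_secs()
}

fn main() {
    let ts = host_get_timestamp();
    // Any sane modern clock reads after 2020-01-01 (1577836800).
    assert!(ts > 1_577_836_800);
    println!("timestamp: {}", ts);
}
```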
Security Guidelines
Memory Management
- Plugins are limited to 64MB of memory per instance
- Avoid memory leaks by properly managing allocations
- Use stack-allocated data when possible
Execution Limits
- Plugins have a 30-second execution timeout
- CPU usage is limited to 1 billion instructions per execution
- Long-running operations should be split into smaller tasks
Input Validation
- Always validate input parameters
- Use JSON schemas for configuration validation
- Sanitize string inputs to prevent injection attacks
Safe Coding Practices
- Avoid unsafe Rust code
- Don’t use system calls or external libraries
- Don’t attempt to access host filesystem or network
- Use only the provided host import functions
Forbidden Imports
The following imports are blocked for security:
- `wasi_snapshot_preview1.*` (when WASI is disabled)
- `env.syscall*`, `env.system*` (system calls)
- `env.fd_*`, `env.path_*` (filesystem access)
- `env.sock*`, `env.net*` (network access)
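A checker for this blocklist could be sketched as follows. The function name and shape are assumptions for illustration, not the actual Driftless validation code:

```rust
// Illustrative import-blocklist checker; names are assumptions for this sketch.
fn is_forbidden_import(module: &str, name: &str, allow_wasi: bool) -> bool {
    // All WASI imports are blocked when WASI is disabled.
    if !allow_wasi && module == "wasi_snapshot_preview1" {
        return true;
    }
    // Within the `env` module, block the documented prefix patterns.
    if module == "env" {
        let blocked = ["syscall", "system", "fd_", "path_", "sock", "net"];
        return blocked.iter().any(|p| name.starts_with(p));
    }
    false
}

fn main() {
    assert!(is_forbidden_import("env", "fd_read", false));
    assert!(is_forbidden_import("wasi_snapshot_preview1", "args_get", false));
    assert!(!is_forbidden_import("wasi_snapshot_preview1", "args_get", true));
    assert!(!is_forbidden_import("env", "host_log", false));
    println!("import checks ok");
}
```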
Building Plugins
Development Build
# Install wasm-pack if not already installed
cargo install wasm-pack
# Build for development
wasm-pack build --target web --out-dir pkg
Production Build
# Build optimized WASM module
wasm-pack build --target web --release --out-dir pkg
Cross-Platform Considerations
- Plugins run on the same platforms as Driftless (Linux, macOS, Windows)
- Use the `wasm32-wasi` target for WASI support (if enabled)
- Test on target platforms before release
Deployment
Plugin Directory Structure
Plugins should be placed in Driftless’s plugin directory:
~/.driftless/plugins/
├── my-plugin.wasm
├── another-plugin.wasm
└── ...
Configuration
Add plugin security configuration to plugins.toml:
[security]
max_memory = 67108864 # 64MB
fuel_limit = 1000000000 # 1B instructions
execution_timeout_secs = 30 # 30 seconds
allow_wasi = false # No WASI access
debug_enabled = false # No debug features
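On the host side, these settings might deserialize into a struct along these lines. The field names mirror the TOML keys above, but the struct itself is an assumption for illustration:

```rust
// Illustrative mirror of the documented security settings; the struct shape
// is an assumption based on the plugins.toml keys, not the actual host type.
#[derive(Debug)]
struct PluginSecurity {
    max_memory: u64,
    fuel_limit: u64,
    execution_timeout_secs: u64,
    allow_wasi: bool,
    debug_enabled: bool,
}

impl Default for PluginSecurity {
    fn default() -> Self {
        PluginSecurity {
            max_memory: 64 * 1024 * 1024, // 64MB
            fuel_limit: 1_000_000_000,    // 1B instructions
            execution_timeout_secs: 30,   // 30 seconds
            allow_wasi: false,            // No WASI access
            debug_enabled: false,         // No debug features
        }
    }
}

fn main() {
    let s = PluginSecurity::default();
    assert_eq!(s.max_memory, 67_108_864);
    assert!(!s.allow_wasi);
    println!("{:?}", s);
}
```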
Registry Publishing
Plugins can be published to registries for distribution:
[[registries]]
name = "my-registry"
url = "https://plugins.example.com"
enabled = true
GitHub Actions Workflow
Create .github/workflows/release-plugin.yml for automated plugin building and publishing:
```yaml
name: Release Plugin

on:
  push:
    tags:
      - 'v*'

jobs:
  build-and-release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Rust
        uses: dtolnay/rust-toolchain@stable
      - name: Install wasm-pack
        run: curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh
      - name: Build WASM plugin
        run: wasm-pack build --target web --release --out-dir pkg
      - name: Create release archive
        run: |
          cd pkg
          tar -czf ../my-plugin-${{ github.ref_name }}.tar.gz *
      - name: Create GitHub Release
        uses: softprops/action-gh-release@v1
        with:
          files: my-plugin-${{ github.ref_name }}.tar.gz
          generate_release_notes: true
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

  publish-to-registry:
    runs-on: ubuntu-latest
    if: github.event_name == 'release'
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Rust
        uses: dtolnay/rust-toolchain@stable
      - name: Install wasm-pack
        run: curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh
      - name: Build and package
        run: |
          wasm-pack build --target web --release --out-dir pkg
          cd pkg
          # Create plugin metadata
          echo '{"name":"my-plugin","version":"'${{ github.ref_name }}'","description":"My custom plugin"}' > plugin.json
      - name: Upload to registry
        run: |
          # This would upload to your plugin registry
          # Implementation depends on your registry API
          echo "Plugin built and ready for registry upload"
```
Testing Plugins
Unit Tests
```rust
#[cfg(test)]
mod tests {
    use super::*;
    use serde_json::Value;

    #[test]
    fn test_task_definitions() {
        let definitions: Vec<Value> = serde_json::from_str(&get_task_definitions()).unwrap();
        assert!(!definitions.is_empty());
        assert_eq!(definitions[0]["name"], "my_custom_task");
    }

    #[test]
    fn test_task_execution() {
        let config = r#"{"message": "test"}"#;
        let result: Value = serde_json::from_str(&execute_task("my_custom_task", config)).unwrap();
        assert_eq!(result["status"], "success");
    }
}
```
Integration Testing
Create a test harness that loads and executes your plugin:
```rust
use wasmtime::{Engine, Module, Store};

#[test]
fn test_plugin_integration() {
    let engine = Engine::default();
    let module = Module::from_file(&engine, "pkg/my_plugin_bg.wasm").unwrap();
    let mut store = Store::new(&engine, ());
    // Test plugin loading and basic functionality
    // ... test implementation
}
```
Best Practices
Performance
- Minimize memory allocations
- Use efficient data structures
- Avoid unnecessary string conversions
- Profile WASM execution time
Error Handling
- Return structured error responses
- Use appropriate HTTP status codes in JSON responses
- Log errors for debugging
Documentation
- Document all exported functions
- Provide JSON schema for configurations
- Include examples in documentation
Versioning
- Use semantic versioning
- Document breaking changes
- Test compatibility with Driftless versions
Troubleshooting
Common Issues
Plugin fails to load:
- Check WASM compilation target
- Verify all required exports are present
- Check for forbidden imports
Execution timeouts:
- Optimize algorithm complexity
- Split large operations
- Increase timeout limits (if allowed)
Memory limits exceeded:
- Reduce memory usage
- Use streaming for large data
- Increase memory limits (if allowed)
Security violations:
- Remove forbidden imports
- Use only allowed host functions
- Follow security guidelines
Debug Logging
Enable debug logging in plugin configuration:
[security]
debug_enabled = true
Use the host logging function:
```rust
console_log!("Debug message: {:?}", some_value);
```
Examples
Custom Task Plugin
See examples/custom-task-plugin/ for a complete example.
Template Filter Plugin
See examples/template-filter-plugin/ for custom Jinja2 filters.
Facts Collector Plugin
See examples/facts-collector-plugin/ for system information gathering.
Contributing
- Follow the security guidelines
- Include comprehensive tests
- Update documentation
- Use conventional commit messages
Support
- Check the Driftless documentation
- Open issues on GitHub for bugs or feature requests
Podman Devcontainer Troubleshooting
This guide covers common Podman + VS Code Dev Containers issues for Driftless development on Linux and macOS.
Scope
- Primary target: Linux and macOS developer workstations
- Windows: best-effort only (prefer WSL2 workflow)
Baseline Checks
Verify Podman is healthy before opening the devcontainer:
podman info
podman ps
If VS Code Dev Containers is configured to use Podman, confirm your VS Code setting:
{
"dev.containers.dockerPath": "podman"
}
Rootless vs Rootful
- Prefer rootless Podman for day-to-day development.
- Keep the devcontainer user non-root (`vscode`) to avoid bind-mount ownership drift.
- If you temporarily switch root mode or engine mode, rebuild and clean artifacts:
cargo clean
Socket and Engine Wiring
Dev Containers expects a Docker-compatible interface. With Podman Desktop this is typically configured for you, but if it breaks:
- Verify the Podman engine works (`podman info`).
- Verify VS Code points to Podman (`dev.containers.dockerPath`).
- Rebuild the container from VS Code: Dev Containers: Rebuild Container.
Common Problems
Permission denied running test binaries
Symptoms:
- `Permission denied (os error 13)` when running test executables in `target/debug/deps`
Likely cause:
- Stale build artifacts after switching user/engine/root mode.
Fix:
cargo clean
./scripts/validate.sh
Linker killed (ld terminated with signal 9)
Symptoms:
- Linking fails with signal 9 during tests or release builds.
Likely cause:
- VM/container out-of-memory condition.
Fixes:
CARGO_BUILD_JOBS=2 cargo test --all --quiet
CARGO_BUILD_JOBS=2 cargo build --release
./scripts/build-release-local.sh
Also increase memory assigned to Podman Desktop when possible.
Use cargo build --release in CI/release pipelines; release-local is intended for local developer machines with tighter memory limits.
Devcontainer no longer starts cleanly
Recommended reset sequence:
- Rebuild container from VS Code.
- If still broken, remove and recreate the container.
- Run `cargo clean` inside the rebuilt container.
Team Recommendations
- Standardize on one documented Podman profile (CPU/memory and rootless mode).
- Keep `CARGO_BUILD_JOBS` conservative for laptop-class machines.
- Ask contributors to run `./scripts/validate.sh --fail-fast` before opening PRs.
Release Workflow
This document explains the release process for Driftless, including how to trigger releases manually or via version tags.
Overview
Driftless uses a separate release workflow that is independent from the main CI pipeline. Releases can be triggered in two ways:
- Manually, using a GitHub Actions `workflow_dispatch` trigger (with customizable options)
- Automatically, by pushing a version tag matching the pattern `vX.Y.Z`
Rationale
The release process is separated from the standard CI workflow for several important reasons:
1. Performance and Efficiency
- Release builds are time-consuming, especially when building binaries for multiple platforms
- Standard CI/CD checks (linting, testing, formatting) run frequently on every push and PR
- Separating releases keeps the main CI pipeline fast and responsive
- Developers get quick feedback on code quality without waiting for unnecessary release builds
2. Resource Optimization
- Release builds consume significant compute resources (cross-compilation, multi-platform builds)
- Not every merge to `main` requires a release
- Running release builds only when needed reduces GitHub Actions usage and costs
3. Clear Separation of Concerns
- CI pipeline focuses on validation: ensuring code quality, passing tests, and meeting standards
- Release pipeline focuses on distribution: building artifacts and creating GitHub releases
- This separation makes workflows easier to understand, maintain, and debug
4. Flexibility and Control
- Manual releases allow for intentional version bumps and controlled release schedules
- Tag-based releases enable GitOps-style workflows
- Draft and pre-release options provide staging mechanisms
Triggering a Release
Manual Release via workflow_dispatch
Manual releases provide the most control and are ideal for planned releases.
Step-by-Step Process
- Navigate to GitHub Actions
  - Go to your repository on GitHub
  - Click on the “Actions” tab
  - Select “Release” from the workflows list on the left
- Trigger the Workflow
  - Click the “Run workflow” button
  - Select the branch (typically `main`)
- Configure Release Options

The workflow provides several input options:

| Input | Description | Required | Default |
| --- | --- | --- | --- |
| `version` | Specific version to release (e.g., `0.2.0`). Leave empty to auto-bump. | No | - |
| `bump` | Version bump type if no version specified (`patch`, `minor`, `major`) | No | `patch` |
| `create_tag` | Whether to create and push a git tag | No | `true` |
| `prerelease` | Mark the release as a pre-release | No | `false` |
| `draft` | Create the release as a draft | No | `false` |

Example Scenarios
Scenario 1: Patch Release (auto-bump)
- Leave `version` empty
- Set `bump` to `patch`
- Leave other options at defaults
- This will bump from `0.1.0` → `0.1.1`
Scenario 2: Specific Version
- Set `version` to `0.2.0`
- Leave other options at defaults
- This will release exactly version `0.2.0`
Scenario 3: Pre-release
- Set `version` to `0.2.0-beta.1`
- Set `prerelease` to `true`
- Set `draft` to `false`
drafttofalse - This creates a pre-release that’s visible but marked as unstable
Scenario 4: Draft Release
- Set `version` to `1.0.0`
- Set `draft` to `true`
drafttotrue - This creates a draft release that you can review and publish later
4. Monitor Progress
- The workflow will appear in the Actions tab
- Click on the running workflow to see real-time logs
- The workflow consists of:
  - Build jobs: Compile binaries for each supported platform
  - Release job: Create GitHub release with artifacts
5. Verify the Release
- Once complete, navigate to the “Releases” page
- Verify the release version, notes, and artifacts
Automatic Release via Version Tags
Tag-based releases enable a GitOps workflow where pushing a version tag automatically triggers a release.
Tag Format
Tags must follow semantic versioning with a v prefix:
- Standard releases: vX.Y.Z (e.g., v0.1.0, v1.2.3)
- Pre-releases: vX.Y.Z-<label> (e.g., v0.2.0-beta.1, v1.0.0-rc.2)
Where:
- X = Major version
- Y = Minor version
- Z = Patch version
- <label> = Optional pre-release identifier
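A candidate tag can be checked against this pattern locally before pushing; a minimal sketch using grep (the workflow's actual pattern matching may differ):

```shell
# Check a candidate tag against the vX.Y.Z[-<label>] pattern described above.
tag="v0.2.0-beta.1"
if printf '%s\n' "$tag" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+(-[0-9A-Za-z.]+)?$'; then
  echo "ok: $tag"
else
  echo "invalid: $tag"
fi
```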
Step-by-Step Process
1. Ensure Your Local Repository is Up-to-Date
git checkout main
git pull origin main
2. Update the Version in Cargo.toml (if needed)
# Edit Cargo.toml manually or use cargo-release
sed -i 's/version = "0.1.0"/version = "0.2.0"/' Cargo.toml
# Commit the change
git add Cargo.toml
git commit -m "chore: bump version to 0.2.0"
git push origin main
3. Create and Push the Tag
# Create an annotated tag
git tag -a v0.2.0 -m "Release v0.2.0"
# Push the tag to GitHub
git push origin v0.2.0
4. Automatic Release
- Pushing the tag automatically triggers the release workflow
- GitHub Actions will build binaries and create a release
- The release will be published automatically (not a draft)
Alternative: Using cargo-release
For a more integrated approach, you can use cargo-release:
# Install cargo-release (if not already installed)
cargo install cargo-release
# Perform a patch release (0.1.0 → 0.1.1)
cargo release patch --execute
# Perform a minor release (0.1.0 → 0.2.0)
cargo release minor --execute
# Perform a major release (0.1.0 → 1.0.0)
cargo release major --execute
# Create a specific version
cargo release --execute 0.3.0
cargo-release will:
- Update the version in Cargo.toml
- Create a git commit
- Create and push a git tag
- This tag push will trigger the release workflow
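The patch/minor/major bump semantics above follow plain semver arithmetic; a small sketch of what each bump does (illustrative only, not cargo-release's implementation):

```shell
# Semver bump sketch: splits MAJOR.MINOR.PATCH and increments one component.
bump() {
  IFS=. read -r major minor patch <<EOF
$1
EOF
  case "$2" in
    major) echo "$((major + 1)).0.0" ;;
    minor) echo "${major}.$((minor + 1)).0" ;;
    patch) echo "${major}.${minor}.$((patch + 1))" ;;
  esac
}

bump 0.1.0 patch   # 0.1.1
bump 0.1.0 minor   # 0.2.0
bump 0.1.0 major   # 1.0.0
```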
Release Process Details
What Happens During a Release
1. Build Phase (runs in parallel)
For each supported platform:
- Check out the code
- Set up the Rust toolchain for the target platform
- Build the release binary with optimizations
- Upload the binary as an artifact
2. Release Phase (runs after all builds complete)
- Download all build artifacts
- Determine the release version
- Generate release notes from git history
- Create a GitHub release with:
  - Version tag
  - Release notes
  - Binary artifacts for download
  - Optional draft/pre-release flags
Generated Artifacts
For each supported platform, a binary artifact is created:
- driftless-linux-amd64 - Linux x86_64 (currently implemented)
- driftless-linux-arm64 - Linux ARM64 (planned)
- driftless-macos-amd64 - macOS Intel (planned)
- driftless-macos-arm64 - macOS Apple Silicon (planned)
- driftless-windows-amd64.exe - Windows x86_64 (planned)
- driftless-windows-arm64.exe - Windows ARM64 (planned)
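Because the artifact names follow an os-arch scheme, the right download for a given machine can be derived from uname; a sketch covering only the targets listed above:

```shell
# Derive the artifact name for the current platform from uname output.
os=$(uname -s | tr '[:upper:]' '[:lower:]')    # e.g. linux or darwin
arch=$(uname -m)                               # e.g. x86_64, aarch64, arm64
case "$os" in darwin) os=macos ;; esac
case "$arch" in
  x86_64)        arch=amd64 ;;
  aarch64|arm64) arch=arm64 ;;
esac
echo "driftless-${os}-${arch}"
```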
Release Notes
Release notes are automatically generated and include:
- Version number
- List of commits since the previous release
- Installation instructions for each platform
- Links to the binary artifacts
Platform Support
Currently Implemented
- ✅ Linux amd64 (x86_64-unknown-linux-gnu)
Planned (Ready to Enable with Additional Setup)
The workflow includes matrix entries for these platforms, but they require additional setup:
- ⏳ Linux arm64 (aarch64-unknown-linux-gnu) - Requires self-hosted runner or GitHub-hosted ubuntu-24.04-arm runner when available
- ⏳ macOS amd64 (x86_64-apple-darwin) - Available on macos-13 GitHub-hosted runner
- ⏳ macOS arm64 (aarch64-apple-darwin) - Available on macos-latest GitHub-hosted runner
- ⏳ Windows amd64 (x86_64-pc-windows-msvc) - Available on windows-latest GitHub-hosted runner
- ⏳ Windows arm64 (aarch64-pc-windows-msvc) - Requires self-hosted runner or GitHub-hosted windows-11-arm runner when available
Note: Some platform combinations (ubuntu-24.04-arm, windows-11-arm) may require self-hosted runners or may not be available yet on GitHub Actions. To enable a platform, verify the runner is available and update the skip_build flag from true to false in the release workflow’s build matrix.
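As an illustration of the flag described in the note, a build-matrix entry for one of the planned targets might look like the following; every field name other than skip_build is an assumption, not the actual workflow contents:

```yaml
# Hypothetical matrix entry (exact field names may differ in the real workflow)
- target: aarch64-apple-darwin
  os: macos-latest
  artifact: driftless-macos-arm64
  skip_build: false   # was true; set to false to enable this platform
```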
Troubleshooting
Release Workflow Fails
Problem: The release workflow fails during the build phase.
Solutions:
- Check the GitHub Actions logs for specific error messages
- Ensure all tests pass in the CI workflow before triggering a release
- Verify that the version number is valid semantic versioning
- Check that there are no uncommitted changes in the repository
Tag Already Exists
Problem: Cannot push a tag because it already exists.
Solutions:
- List existing tags: git tag -l
- Delete the local tag: git tag -d vX.Y.Z
- Delete the remote tag (if needed): git push origin :refs/tags/vX.Y.Z
- Create a new tag with a different version
Version Mismatch
Problem: The version in Cargo.toml doesn’t match the git tag.
Solutions:
- Ensure you’ve updated Cargo.toml before creating the tag
- Or use cargo-release, which handles this automatically
- If using manual workflow dispatch, specify the version explicitly
Binary Not Building
Problem: A specific platform’s binary fails to build.
Solutions:
- Check if the platform is currently implemented (see Platform Support section)
- Ensure the skip_build flag is set correctly in the workflow
- Verify that the build matrix includes the correct target triple
- Check platform-specific build logs for compilation errors
Release Not Appearing
Problem: Tag was pushed but no release was created.
Solutions:
- Verify the tag matches the pattern vX.Y.Z (with the v prefix)
- Check the Actions tab to see if the workflow ran
- Look for errors in the workflow logs
- Ensure GitHub Actions has write permissions for releases
Best Practices
- Always test on main first: Ensure the main branch CI passes before creating a release
- Use semantic versioning: Follow the MAJOR.MINOR.PATCH convention
- Write meaningful release notes: While auto-generated, consider editing them for clarity
- Use pre-releases for testing: Mark unstable versions as pre-releases
- Create draft releases for major versions: Review major releases before publishing
- Tag from main branch: Always create release tags from the main branch
- Keep version in sync: Ensure the Cargo.toml version matches the tag
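The version-sync practice can be checked mechanically before tagging; a sketch using grep against a throwaway Cargo.toml, assuming the manifest uses the standard version = "X.Y.Z" line:

```shell
# Compare the version in Cargo.toml with the tag about to be pushed.
tmp=$(mktemp -d)
printf 'version = "0.2.0"\n' > "$tmp/Cargo.toml"   # stand-in for the real manifest

cargo_version=$(grep -m1 '^version' "$tmp/Cargo.toml" | cut -d'"' -f2)
tag="v0.2.0"
if [ "v$cargo_version" = "$tag" ]; then
  echo "in sync: $tag"
else
  echo "mismatch: Cargo.toml has $cargo_version, tag is $tag"
fi
```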
Examples
Example 1: Regular Patch Release
# Update version
sed -i 's/version = "0.1.0"/version = "0.1.1"/' Cargo.toml
git add Cargo.toml
git commit -m "chore: bump version to 0.1.1"
git push origin main
# Create and push tag
git tag -a v0.1.1 -m "Release v0.1.1: Bug fixes and improvements"
git push origin v0.1.1
Example 2: Major Release via Workflow Dispatch
- Go to Actions → Release → Run workflow
- Set version to 1.0.0
- Set draft to true (to review before publishing)
- Click “Run workflow”
- Review the draft release, edit notes if needed
- Publish the release
Example 3: Beta Release
# Update version
sed -i 's/version = "0.2.0"/version = "0.3.0-beta.1"/' Cargo.toml
git add Cargo.toml
git commit -m "chore: bump version to 0.3.0-beta.1"
git push origin main
# Create and push tag
git tag -a v0.3.0-beta.1 -m "Release v0.3.0-beta.1: Beta testing"
git push origin v0.3.0-beta.1
Or via workflow dispatch:
- Go to Actions → Release → Run workflow
- Set version to 0.3.0-beta.1
- Set prerelease to true
- Click “Run workflow”
Additional Resources
Repository Settings Management
This document describes how repository settings are managed programmatically using GitHub Actions.
Overview
The driftless-hq/driftless repository uses an automated workflow to enforce repository settings consistently. Settings are defined in .github/repo-settings.yml and applied automatically when changes are made to the .github directory.
Configuration File
All repository settings are defined in .github/repo-settings.yml. This file includes:
Repository Settings
- Basic Information: Description, homepage URL, topics
- Features: Issues, Wiki, Projects, Downloads
- Merge Settings: Squash merge, merge commits, rebase merging, auto-merge
- Branch Management: Auto-delete branches after merge
Branch Protection
Branch protection rules for the main branch include:
1. Pull Request Reviews
- Minimum number of required approvals (default: 1)
- Dismiss stale reviews on new commits
- Code owner review requirements
2. Status Checks
- Required checks that must pass before merging:
  - Test (ubuntu-latest, amd64, stable)
  - Test (ubuntu-latest, amd64, beta)
  - Test (ubuntu-latest, amd64, 1.92)
  - Security Audit
  - Unused Dependencies
  - Outdated Dependencies
  - Build Documentation
- Require branches to be up to date before merging
3. Additional Protections
- Require conversation resolution before merging
- Prevent force pushes
- Prevent deletions
- Optional: Require linear history
- Optional: Require signed commits
GitHub Pages
- Build Type: GitHub Actions (not branch-based)
- Source: Automatically deployed from workflow
Security
- Vulnerability Alerts: Enabled
- Automated Security Fixes: Enabled (Dependabot)
Enforcement Workflow
The .github/workflows/enforce-repo-settings.yml workflow automatically applies settings when:
- Changes are pushed to the main branch that modify files in .github/
- The workflow is manually triggered via workflow_dispatch
Workflow Steps
- Checkout: Retrieves the repository code
- Read Settings: Validates that .github/repo-settings.yml exists
- Install Tools: Installs yq for YAML parsing
- Apply Settings: Uses the GitHub API to update:
  - Repository metadata and features
  - Repository topics
  - Branch protection rules
  - Security settings
- Verify: Confirms settings were applied correctly
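The apply step can be pictured with a hypothetical workflow fragment; the yq query, field names, and gh invocation below are illustrative, not the actual workflow contents:

```yaml
# Illustrative apply step: read one setting from repo-settings.yml and
# push it to the GitHub API. Not the real workflow.
- name: Apply repository description
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    description=$(yq '.repository.description' .github/repo-settings.yml)
    gh api "repos/$GITHUB_REPOSITORY" -X PATCH -f description="$description"
```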
Making Changes
To modify repository settings:
- Edit .github/repo-settings.yml with your desired changes
- Create a pull request
- After the PR is merged to main, the workflow will automatically apply the new settings
Example: Change Required Approvals
branch_protection:
main:
required_pull_request_reviews:
required_approving_review_count: 2 # Changed from 1 to 2
Example: Add a New Required Status Check
branch_protection:
main:
required_status_checks:
contexts:
- "Test (ubuntu-latest, amd64, stable)"
- "My New Check" # Add your new check here
Permissions
The workflow uses the default GITHUB_TOKEN which has limited permissions. Some settings may require:
- Repository admin access
- A Personal Access Token (PAT) with repo and admin:repo_hook scopes
If the workflow fails with permission errors, consider:
- Using a PAT stored as a repository secret
- Granting additional permissions to the default token (if supported by GitHub)
- Applying sensitive settings manually through the GitHub UI
Troubleshooting
Workflow Fails with Permission Errors
Issue: The workflow cannot apply certain settings due to insufficient permissions.
Solution:
- Some settings require repository admin access
- The default GITHUB_TOKEN may not have sufficient permissions
- Consider using a PAT or applying settings manually
Settings Not Applied
Issue: Changes to .github/repo-settings.yml don’t trigger the workflow.
Solution:
- Ensure changes are merged to the main branch
- Check that the workflow file exists at .github/workflows/enforce-repo-settings.yml
- Manually trigger the workflow using the “Actions” tab in GitHub
Status Checks Not Found
Issue: Branch protection complains that status checks don’t exist.
Solution:
- Status checks must run at least once before they can be required
- Create a test PR to trigger CI workflows
- After workflows run, the checks will be available
Manual Settings Application
To manually apply settings without pushing to main:
- Go to the repository’s “Actions” tab
- Select “Enforce Repository Settings” workflow
- Click “Run workflow”
- Select the main branch
- Click the “Run workflow” button
Related Documentation
- Branch Protection Setup Guide
- GitHub Docs: Managing Branch Protection
- GitHub Docs: Repository Settings API
Rust API Documentation
The complete Rust API documentation is available in the generated rustdoc:
Note: The API documentation is generated from the Rust source code in the driftless crate. The documentation is automatically built and deployed via CI/CD.
This documentation includes:
- Complete API reference for all public modules
- Type definitions and trait implementations
- Function signatures and usage examples
- Module-level documentation
The API documentation is automatically generated from the Rust source code and includes detailed information about:
Main Modules
- apply - Configuration operations execution engine with idempotent operations
- facts - Facts collectors for system information and metrics gathering
- logs - Log sources and outputs for log collection and forwarding
- docs - Auto-generated documentation utilities
For the most up-to-date API information, please refer to the linked rustdoc above.
Reference
This section contains reference documentation for Driftless configuration operations, facts collectors, and log processing.
Driftless Facts Reference
Comprehensive reference for all available facts collectors in Driftless.
This documentation is auto-generated from the Rust source code.
Overview
Facts collectors gather system metrics and inventory information. Each collector corresponds to a specific type of system information or metric.
Facts Collectors (facts)
Collector Configuration
All facts collectors support common configuration fields for controlling collection behavior:
- name: Collector name (used for metric names)
- enabled: Whether this collector is enabled (default: true)
- poll_interval: Poll interval in seconds (how often to collect this metric)
- labels: Additional labels for this collector
CPU Metrics
cpu
Description: Collect CPU usage, frequency, temperature, and load average metrics
Required Fields:
- base (BaseCollector): No description available
- collect (CpuCollectOptions): CPU metrics to collect
- name (String): Collector name (used for metric names)
- poll_interval (u64): Poll interval in seconds (how often to collect this metric)
- thresholds (CpuThresholds): Thresholds for alerts
Optional Fields:
- enabled (bool): Whether this collector is enabled (default: true)
- labels (HashMap<String, String>): Additional labels for this collector
Examples:
Basic CPU metrics collection:
YAML Format:
type: cpu
name: cpu
poll_interval: 30
collect:
usage: true
per_core: true
frequency: true
temperature: true
load_average: true
thresholds:
usage_warning: 80.0
usage_critical: 95.0
temp_warning: 70.0
temp_critical: 85.0
JSON Format:
{
"type": "cpu",
"name": "cpu",
"poll_interval": 30,
"collect": {
"usage": true,
"per_core": true,
"frequency": true,
"temperature": true,
"load_average": true
},
"thresholds": {
"usage_warning": 80.0,
"usage_critical": 95.0,
"temp_warning": 70.0,
"temp_critical": 85.0
}
}
TOML Format:
[[collectors]]
type = "cpu"
name = "cpu"
poll_interval = 30
[collectors.collect]
usage = true
per_core = true
frequency = true
temperature = true
load_average = true
[collectors.thresholds]
usage_warning = 80.0
usage_critical = 95.0
temp_warning = 70.0
temp_critical = 85.0
Output:
cpu_count: 4
usage_percent: 45.2
usage_warning: false
usage_critical: false
cores:
- core_id: 0
usage_percent: 42.1
frequency_mhz: 2400
- core_id: 1
usage_percent: 48.3
frequency_mhz: 2400
frequency_mhz: 2400.0
temperature_celsius: null
temperature_available: false
temp_warning: false
temp_critical: false
load_average:
"1m": 1.25
"5m": 1.15
"15m": 1.08
Command Output
command
Description: Execute custom commands and collect their output as facts
Required Fields:
- base (BaseCollector): No description available
- command (String): Command to execute
- env (HashMap<String, String>): Environment variables
- format (CommandOutputFormat): Expected output format
- name (String): Collector name (used for metric names)
- poll_interval (u64): Poll interval in seconds (how often to collect this metric)
Optional Fields:
- cwd (Option): Working directory for command
- enabled (bool): Whether this collector is enabled (default: true)
- labels (HashMap<String, String>): Additional labels for this collector
Examples:
Basic command output collection:
YAML Format:
type: command
name: uptime
command: uptime -p
format: text
labels:
category: system
JSON Format:
{
"type": "command",
"name": "uptime",
"command": "uptime -p",
"format": "text",
"labels": {
"category": "system"
}
}
TOML Format:
[[collectors]]
type = "command"
name = "uptime"
command = "uptime -p"
format = "text"
[collectors.labels]
category = "system"
JSON command output parsing:
YAML Format:
type: command
name: docker_stats
command: docker stats --no-stream --format json
format: json
cwd: /tmp
env:
DOCKER_HOST: unix:///var/run/docker.sock
JSON Format:
{
"type": "command",
"name": "docker_stats",
"command": "docker stats --no-stream --format json",
"format": "json",
"cwd": "/tmp",
"env": {
"DOCKER_HOST": "unix:///var/run/docker.sock"
}
}
TOML Format:
[[collectors]]
type = "command"
name = "docker_stats"
command = "docker stats --no-stream --format json"
format = "json"
cwd = "/tmp"
[collectors.env]
DOCKER_HOST = "unix:///var/run/docker.sock"
Output:
command: "docker stats --no-stream --format json"
exit_code: 0
output:
- container: "web_server"
cpu_percent: "5.2"
memory_usage: "128MiB / 1GiB"
net_io: "1.2kB / 3.4kB"
- container: "database"
cpu_percent: "2.1"
memory_usage: "256MiB / 2GiB"
net_io: "500B / 1.2kB"
labels:
category: monitoring
Key-value command output parsing:
YAML Format:
type: command
name: system_info
command: echo "hostname=$(hostname)\nos_version=$(cat /etc/os-release | grep PRETTY_NAME | cut -d'=' -f2 | tr -d '\"')\nuptime=$(uptime -p)"
format: key_value
labels:
category: system
JSON Format:
{
"type": "command",
"name": "system_info",
"command": "echo \"hostname=$(hostname)\\nos_version=$(cat /etc/os-release | grep PRETTY_NAME | cut -d'=' -f2 | tr -d '\\\"')\\nuptime=$(uptime -p)\"",
"format": "key_value",
"labels": {
"category": "system"
}
}
TOML Format:
[[collectors]]
type = "command"
name = "system_info"
command = "echo \"hostname=$(hostname)\\nos_version=$(cat /etc/os-release | grep PRETTY_NAME | cut -d'=' -f2 | tr -d '\\\"')\\nuptime=$(uptime -p)\""
format = "key_value"
[collectors.labels]
category = "system"
Output:
command: "echo \"hostname=$(hostname)\\nos_version=$(cat /etc/os-release | grep PRETTY_NAME | cut -d'=' -f2 | tr -d '\\\"')\\nuptime=$(uptime -p)\""
exit_code: 0
output:
hostname: "web-server-01"
os_version: "Ubuntu 22.04.3 LTS"
uptime: "up 2 weeks, 3 days, 4 hours"
labels:
category: system
Text command output (default):
YAML Format:
type: command
name: disk_usage
command: df -h /
format: text
labels:
category: storage
JSON Format:
{
"type": "command",
"name": "disk_usage",
"command": "df -h /",
"format": "text",
"labels": {
"category": "storage"
}
}
TOML Format:
[[collectors]]
type = "command"
name = "disk_usage"
command = "df -h /"
format = "text"
[collectors.labels]
category = "storage"
Output:
command: "df -h /"
exit_code: 0
stdout: |
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 50G 15G 33G 31% /
labels:
category: storage
Command with environment variables and working directory:
YAML Format:
type: command
name: custom_script
command: ./check_service.sh
format: json
cwd: /opt/myapp
env:
SERVICE_NAME: myapp
LOG_LEVEL: info
labels:
category: application
JSON Format:
{
"type": "command",
"name": "custom_script",
"command": "./check_service.sh",
"format": "json",
"cwd": "/opt/myapp",
"env": {
"SERVICE_NAME": "myapp",
"LOG_LEVEL": "info"
},
"labels": {
"category": "application"
}
}
TOML Format:
[[collectors]]
type = "command"
name = "custom_script"
command = "./check_service.sh"
format = "json"
cwd = "/opt/myapp"
[collectors.env]
SERVICE_NAME = "myapp"
LOG_LEVEL = "info"
[collectors.labels]
category = "application"
Output:
command: "./check_service.sh"
exit_code: 0
output:
service_status: "running"
uptime_seconds: 3600
version: "1.2.3"
health_checks:
- name: "database"
status: "ok"
- name: "cache"
status: "ok"
labels:
category: application
Disk Metrics
disk
Description: Collect disk space and I/O statistics for mounted filesystems
Required Fields:
- base (BaseCollector): No description available
- collect (DiskCollectOptions): Disk metrics to collect
- devices (Vec): Disk devices to monitor (empty = all)
- mount_points (Vec): Mount points to monitor (empty = all)
- name (String): Collector name (used for metric names)
- poll_interval (u64): Poll interval in seconds (how often to collect this metric)
- thresholds (DiskThresholds): Thresholds for alerts
Optional Fields:
- enabled (bool): Whether this collector is enabled (default: true)
- labels (HashMap<String, String>): Additional labels for this collector
Examples:
Basic disk metrics collection:
YAML Format:
type: disk
name: disk
devices: ["/dev/sda", "/dev/sdb"]
mount_points: ["/", "/home", "/var"]
collect:
total: true
used: true
free: true
available: true
percentage: true
io: true
thresholds:
usage_warning: 80.0
usage_critical: 90.0
JSON Format:
{
"type": "disk",
"name": "disk",
"devices": ["/dev/sda", "/dev/sdb"],
"mount_points": ["/", "/home", "/var"],
"collect": {
"total": true,
"used": true,
"free": true,
"available": true,
"percentage": true,
"io": true
},
"thresholds": {
"usage_warning": 80.0,
"usage_critical": 90.0
}
}
TOML Format:
[[collectors]]
type = "disk"
name = "disk"
devices = ["/dev/sda", "/dev/sdb"]
mount_points = ["/", "/home", "/var"]
[collectors.collect]
total = true
used = true
free = true
available = true
percentage = true
io = true
[collectors.thresholds]
usage_warning = 80.0
usage_critical = 90.0
Output:
disks:
- device: "/dev/sda1"
mount_point: "/"
is_removable: false
total_bytes: 536870912000
total_mb: 512000
total_gb: 500
used_bytes: 268435456000
used_mb: 256000
used_gb: 250
free_bytes: 134217728000
free_mb: 128000
free_gb: 125
available_bytes: 107374182400
available_mb: 102400
available_gb: 100
usage_percent: 50
available_percent: 20
disk_pressure: "medium"
usage_warning: false
usage_critical: false
io_supported: false
labels:
storage_type: ssd
Memory Metrics
memory
Description: Collect memory usage statistics including total, used, free, and swap
Required Fields:
- base (BaseCollector): No description available
- collect (MemoryCollectOptions): Memory metrics to collect
- name (String): Collector name (used for metric names)
- poll_interval (u64): Poll interval in seconds (how often to collect this metric)
- thresholds (MemoryThresholds): Thresholds for alerts
Optional Fields:
- enabled (bool): Whether this collector is enabled (default: true)
- labels (HashMap<String, String>): Additional labels for this collector
Examples:
Basic memory metrics collection:
YAML Format:
type: memory
name: memory
collect:
total: true
used: true
free: true
available: true
swap: true
percentage: true
thresholds:
usage_warning: 85.0
usage_critical: 95.0
JSON Format:
{
"type": "memory",
"name": "memory",
"collect": {
"total": true,
"used": true,
"free": true,
"available": true,
"swap": true,
"percentage": true
},
"thresholds": {
"usage_warning": 85.0,
"usage_critical": 95.0
}
}
TOML Format:
[[collectors]]
type = "memory"
name = "memory"
[collectors.collect]
total = true
used = true
free = true
available = true
swap = true
percentage = true
[collectors.thresholds]
usage_warning = 85.0
usage_critical = 95.0
Output:
total_bytes: 8589934592
total_mb: 8192
total_gb: 8
used_bytes: 4294967296
used_mb: 4096
used_gb: 4
free_bytes: 2147483648
free_mb: 2048
free_gb: 2
available_bytes: 3221225472
available_mb: 3072
available_gb: 3
usage_percent: 50
available_percent: 37
memory_pressure: "low"
swap_total_bytes: 2147483648
swap_used_bytes: 536870912
swap_free_bytes: 1610612736
swap_total_mb: 2048
swap_used_mb: 512
swap_free_mb: 1536
swap_usage_percent: 25
swap_pressure: "low"
usage_warning: false
usage_critical: false
Network Metrics
network
Description: Collect network interface statistics and status information
Required Fields:
- base (BaseCollector): No description available
- collect (NetworkCollectOptions): Network metrics to collect
- interfaces (Vec): Network interfaces to monitor (empty = all)
- name (String): Collector name (used for metric names)
- poll_interval (u64): Poll interval in seconds (how often to collect this metric)
Optional Fields:
- enabled (bool): Whether this collector is enabled (default: true)
- labels (HashMap<String, String>): Additional labels for this collector
Examples:
Basic network metrics collection:
YAML Format:
type: network
name: network
interfaces: ["eth0", "wlan0"]
collect:
bytes: true
packets: true
errors: true
status: true
JSON Format:
{
"type": "network",
"name": "network",
"interfaces": ["eth0", "wlan0"],
"collect": {
"bytes": true,
"packets": true,
"errors": true,
"status": true
}
}
TOML Format:
[[collectors]]
type = "network"
name = "network"
interfaces = ["eth0", "wlan0"]
[collectors.collect]
bytes = true
packets = true
errors = true
status = true
Output:
interfaces:
- name: "eth0"
bytes_received: 1234567890
bytes_transmitted: 987654321
total_bytes: 2222222211
packets_received: 1234567
packets_transmitted: 987654
total_packets: 2222221
errors_on_received: 0
errors_on_transmitted: 0
total_errors: 0
status: "up"
- name: "lo"
bytes_received: 123456789
bytes_transmitted: 123456789
total_bytes: 246913578
packets_received: 123456
packets_transmitted: 123456
total_packets: 246912
errors_on_received: 0
errors_on_transmitted: 0
total_errors: 0
status: "up"
labels:
network_type: corporate
Process Metrics
process
Description: Collect process information and resource usage statistics
Required Fields:
- base (BaseCollector): No description available
- collect (ProcessCollectOptions): Process metrics to collect
- name (String): Collector name (used for metric names)
- patterns (Vec): Process name patterns to monitor (empty = all processes)
- poll_interval (u64): Poll interval in seconds (how often to collect this metric)
Optional Fields:
- enabled (bool): Whether this collector is enabled (default: true)
- labels (HashMap<String, String>): Additional labels for this collector
Examples:
Basic process metrics collection:
YAML Format:
type: process
name: process
patterns: ["nginx", "apache", "sshd"]
collect:
count: true
cpu: true
memory: true
status: true
JSON Format:
{
"type": "process",
"name": "process",
"patterns": ["nginx", "apache", "sshd"],
"collect": {
"count": true,
"cpu": true,
"memory": true,
"status": true
}
}
TOML Format:
[[collectors]]
type = "process"
name = "process"
patterns = ["nginx", "apache", "sshd"]
[collectors.collect]
count = true
cpu = true
memory = true
status = true
Output:
total_processes: 150
matched_processes: 3
processes:
- pid: 1234
name: "nginx"
cpu_percent: 5
memory_bytes: 104857600
memory_mb: 100
memory_gb: 0
status: "running"
command: "/usr/sbin/nginx"
parent_pid: 1
- pid: 1235
name: "nginx"
cpu_percent: 3
memory_bytes: 52428800
memory_mb: 50
memory_gb: 0
status: "running"
command: "/usr/sbin/nginx"
parent_pid: 1234
- pid: 5678
name: "apache2"
cpu_percent: 2
memory_bytes: 209715200
memory_mb: 200
memory_gb: 0
status: "sleeping"
command: "/usr/sbin/apache2"
parent_pid: 1
labels:
process_type: web_servers
System Information
system
Description: Collect system information including hostname, OS, kernel, uptime, and architecture
Required Fields:
- base (BaseCollector): No description available
- collect (SystemCollectOptions): What system information to collect
- name (String): Collector name (used for metric names)
- poll_interval (u64): Poll interval in seconds (how often to collect this metric)
Optional Fields:
- enabled (bool): Whether this collector is enabled (default: true)
- labels (HashMap<String, String>): Additional labels for this collector
Examples:
Basic system information collection:
YAML Format:
type: system
name: system
collect:
hostname: true
os: true
kernel: true
uptime: true
boot_time: true
arch: true
JSON Format:
{
"type": "system",
"name": "system",
"collect": {
"hostname": true,
"os": true,
"kernel": true,
"uptime": true,
"boot_time": true,
"arch": true
}
}
TOML Format:
[[collectors]]
type = "system"
name = "system"
[collectors.collect]
hostname = true
os = true
kernel = true
uptime = true
boot_time = true
arch = true
Output:
hostname: "myhost.example.com"
os: "linux"
os_family: "unix"
kernel_version: "5.15.0-91-generic"
uptime_seconds: 1234567
boot_time: 1706012345
cpu_arch: "x86_64"
Driftless Logs Reference
Comprehensive reference for all available log sources and outputs in Driftless.
This documentation is auto-generated from the Rust source code.
Overview
Log processors handle log collection and forwarding. Each processor corresponds to a specific log source or output destination.
Log Sources/Outputs (logs)
Processor Configuration
All log processors support common configuration fields for controlling processing behavior:
- enabled: Whether this processor is enabled (default: true)
- name: Processor name for identification
Log Outputs
console
Description: Output logs to stdout/stderr for debugging
Required Fields:
name(String): Processor name for identification
Optional Fields:
enabled(bool): Whether this processor is enabled (default: true)
Examples:
File log output:
YAML Format:
logs:
- type: file
path: /var/log/app.log
format: json
rotation:
size: 10MB
count: 5
JSON Format:
{
"logs": [
{
"type": "file",
"path": "/var/log/app.log",
"format": "json",
"rotation": {
"size": "10MB",
"count": 5
}
}
]
}
TOML Format:
[[logs]]
type = "file"
path = "/var/log/app.log"
format = "json"
[logs.rotation]
size = "10MB"
count = 5
Console log output:
YAML Format:
logs:
- type: console
format: text
level: info
JSON Format:
{
"logs": [
{
"type": "console",
"format": "text",
"level": "info"
}
]
}
TOML Format:
[[logs]]
type = "console"
format = "text"
level = "info"
Syslog log output:
YAML Format:
logs:
- type: syslog
facility: local0
severity: info
tag: driftless
server: 127.0.0.1:514
protocol: udp
JSON Format:
{
"logs": [
{
"type": "syslog",
"facility": "local0",
"severity": "info",
"tag": "driftless",
"server": "127.0.0.1:514",
"protocol": "udp"
}
]
}
TOML Format:
[[logs]]
type = "syslog"
facility = "local0"
severity = "info"
tag = "driftless"
server = "127.0.0.1:514"
protocol = "udp"
file
Description: Write logs to files with rotation and compression
Required Fields:
name(String): Processor name for identification
Optional Fields:
enabled(bool): Whether this processor is enabled (default: true)
Examples:
File log output:
YAML Format:
logs:
- type: file
path: /var/log/app.log
format: json
rotation:
size: 10MB
count: 5
JSON Format:
{
"logs": [
{
"type": "file",
"path": "/var/log/app.log",
"format": "json",
"rotation": {
"size": "10MB",
"count": 5
}
}
]
}
TOML Format:
[[logs]]
type = "file"
path = "/var/log/app.log"
format = "json"
[logs.rotation]
size = "10MB"
count = 5
http
Description: Send logs to HTTP endpoints with authentication and retry
Required Fields:
name(String): Processor name for identification
Optional Fields:
enabled(bool): Whether this processor is enabled (default: true)
s3
Description: Upload logs to S3 with batching and compression
Required Fields:
name (String): Processor name for identification
Optional Fields:
enabled (bool): Whether this processor is enabled (default: true)
syslog
Description: Send logs to syslog with RFC compliance
Required Fields:
name (String): Processor name for identification
Optional Fields:
enabled (bool): Whether this processor is enabled (default: true)
Examples:
Syslog log output:
YAML Format:
logs:
- type: syslog
facility: local0
severity: info
tag: driftless
server: 127.0.0.1:514
protocol: udp
JSON Format:
{
"logs": [
{
"type": "syslog",
"facility": "local0",
"severity": "info",
"tag": "driftless",
"server": "127.0.0.1:514",
"protocol": "udp"
}
]
}
TOML Format:
[[logs]]
type = "syslog"
facility = "local0"
severity = "info"
tag = "driftless"
server = "127.0.0.1:514"
protocol = "udp"
Driftless Configuration Reference
Comprehensive reference for all available configuration components in Driftless.
This documentation is auto-generated from the Rust source code.
Overview
Driftless provides three main configuration components that work together to manage systems:
- Configuration Operations (apply): Define and enforce desired system state
- Facts Collectors (facts): Gather system metrics and inventory information
- Log Sources/Outputs (logs): Handle log collection and forwarding
Configuration Operations (apply)
Configuration operations define desired system state and are executed idempotently. Each operation corresponds to a specific aspect of system configuration management.
Task Result Registration and Conditions
All configuration operations support special fields for conditional execution and capturing results:
- when: An optional expression (usually containing variables) that determines whether the task should be executed. If the condition evaluates to false, the task is skipped.
- register: An optional variable name to capture the result of the task execution. The captured data varies by task type and can be used in subsequent tasks via template expansion (e.g., {{ my_var.stdout }}). This field only appears in the documentation for tasks that provide output results.
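The two fields combine naturally: register the result of one task, then gate a later task on it. A minimal YAML sketch; the exact comparison syntax shown in the when expression (nginx_check.rc == 0) is an assumption based on the registered outputs documented for the command task, not a confirmed expression grammar:

```yaml
tasks:
  - type: command
    description: "Check whether nginx is installed"
    command: which nginx
    register: nginx_check
  - type: command
    description: "Reload nginx only when the check succeeded"
    command: systemctl reload nginx
    when: "nginx_check.rc == 0"   # rc comes from the registered outputs of the first task
```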
Command Execution
command
Description: Command execution task
Required Fields:
- command (String): Command to execute
- env (HashMap<String, String>): Environment variables
- exit_code (i32): Expected exit code (default: 0)
- idempotent (bool): Whether the command should be idempotent (only run if not already applied)
- stream_output (bool): Whether to stream output in real time (useful for long-running commands)
Optional Fields:
- cwd (Option): Working directory for command execution
- description (Option): Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports
- group (Option): Group to run the command as
- register (Option): Optional variable name to register the task result in
- user (Option): User to run the command as
- when (Option): Optional condition to determine if the task should run
Registered Outputs:
- changed (bool): Whether the command was actually run
- rc (i32): The exit code of the command
- stderr (String): The standard error of the command
- stdout (String): The standard output of the command
Examples:
Run a simple command:
YAML Format:
- type: command
description: "Update package list"
command: apt-get update
JSON Format:
{
"type": "command",
"description": "Update package list",
"command": "apt-get update"
}
TOML Format:
[[tasks]]
type = "command"
description = "Update package list"
command = "apt-get update"
Run command with specific working directory:
YAML Format:
- type: command
description: "Build application in project directory"
command: make build
cwd: /opt/myapp
JSON Format:
{
"type": "command",
"description": "Build application in project directory",
"command": "make build",
"cwd": "/opt/myapp"
}
TOML Format:
[[tasks]]
type = "command"
description = "Build application in project directory"
command = "make build"
cwd = "/opt/myapp"
Run command as specific user:
YAML Format:
- type: command
description: "Restart nginx service"
command: systemctl restart nginx
user: root
JSON Format:
{
"type": "command",
"description": "Restart nginx service",
"command": "systemctl restart nginx",
"user": "root"
}
TOML Format:
[[tasks]]
type = "command"
description = "Restart nginx service"
command = "systemctl restart nginx"
user = "root"
Register command output:
YAML Format:
- type: command
description: "Check system uptime"
command: uptime
register: uptime_result
- type: debug
msg: "The system uptime is: {{ uptime_result.stdout }}"
JSON Format:
[
{
"type": "command",
"description": "Check system uptime",
"command": "uptime",
"register": "uptime_result"
},
{
"type": "debug",
"msg": "The system uptime is: {{ uptime_result.stdout }}"
}
]
TOML Format:
[[tasks]]
type = "command"
description = "Check system uptime"
command = "uptime"
register = "uptime_result"
[[tasks]]
type = "debug"
msg = "The system uptime is: {{ uptime_result.stdout }}"
Idempotent command:
YAML Format:
- type: command
description: "Initialize database (idempotent)"
command: /opt/myapp/init-db.sh
idempotent: true
exit_code: 0
JSON Format:
{
"type": "command",
"description": "Initialize database (idempotent)",
"command": "/opt/myapp/init-db.sh",
"idempotent": true,
"exit_code": 0
}
TOML Format:
[[tasks]]
type = "command"
description = "Initialize database (idempotent)"
command = "/opt/myapp/init-db.sh"
idempotent = true
exit_code = 0
raw
Description: Execute commands without shell processing task
Required Fields:
- args (Vec): Command arguments (argv[1..])
- creates (bool): Whether the command creates resources. When enabled with creates_files, the command is skipped if any of the specified files/directories already exist (idempotency check).
- creates_files (Vec): Files/directories created by the command. Used with the creates flag for idempotency; if any listed file/directory already exists, the command is skipped.
- environment (HashMap<String, String>): Environment variables
- executable (String): Command to execute (argv[0])
- exit_codes (Vec): Expected exit codes (defaults to [0])
- force (bool): Force command execution
- ignore_errors (bool): Whether to ignore errors
- removes (bool): Whether the command removes resources. When enabled with removes_files, the command is skipped if any of the specified files/directories don't exist (idempotency check).
- removes_files (Vec): Files/directories removed by the command. Used with the removes flag for idempotency; if any listed file/directory doesn't exist, the command is skipped.
Optional Fields:
- chdir (Option): Working directory for command execution
- description (Option): Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports
- timeout (Option): Execution timeout in seconds
- when (Option): Optional condition to determine if the task should run
Examples:
Execute a simple command:
YAML Format:
- type: raw
description: "List directory contents"
executable: ls
args: ["-la", "/tmp"]
JSON Format:
{
"type": "raw",
"description": "List directory contents",
"executable": "ls",
"args": ["-la", "/tmp"]
}
TOML Format:
[[tasks]]
type = "raw"
description = "List directory contents"
executable = "ls"
args = ["-la", "/tmp"]
Execute command with environment variables:
YAML Format:
- type: raw
description: "Run command with environment"
executable: /usr/local/bin/myapp
args: ["--config", "/etc/myapp/config.json"]
environment:
DATABASE_URL: "postgresql://localhost/mydb"
LOG_LEVEL: "debug"
JSON Format:
{
"type": "raw",
"description": "Run command with environment",
"executable": "/usr/local/bin/myapp",
"args": ["--config", "/etc/myapp/config.json"],
"environment": {
"DATABASE_URL": "postgresql://localhost/mydb",
"LOG_LEVEL": "debug"
}
}
TOML Format:
[[tasks]]
type = "raw"
description = "Run command with environment"
executable = "/usr/local/bin/myapp"
args = ["--config", "/etc/myapp/config.json"]
environment = { DATABASE_URL = "postgresql://localhost/mydb", LOG_LEVEL = "debug" }
Execute command with timeout:
YAML Format:
- type: raw
description: "Run command with timeout"
executable: sleep
args: ["30"]
timeout: 10
ignore_errors: true
JSON Format:
{
"type": "raw",
"description": "Run command with timeout",
"executable": "sleep",
"args": ["30"],
"timeout": 10,
"ignore_errors": true
}
TOML Format:
[[tasks]]
type = "raw"
description = "Run command with timeout"
executable = "sleep"
args = ["30"]
timeout = 10
ignore_errors = true
Execute command in specific directory:
YAML Format:
- type: raw
description: "Run command in project directory"
executable: make
args: ["build"]
chdir: /opt/myproject
JSON Format:
{
"type": "raw",
"description": "Run command in project directory",
"executable": "make",
"args": ["build"],
"chdir": "/opt/myproject"
}
TOML Format:
[[tasks]]
type = "raw"
description = "Run command in project directory"
executable = "make"
args = ["build"]
chdir = "/opt/myproject"
Execute command with creates/removes checks:
YAML Format:
- type: raw
description: "Create configuration file"
executable: touch
args: ["/etc/myapp/config.conf"]
creates: true
creates_files: ["/etc/myapp/config.conf"]
JSON Format:
{
"type": "raw",
"description": "Create configuration file",
"executable": "touch",
"args": ["/etc/myapp/config.conf"],
"creates": true,
"creates_files": ["/etc/myapp/config.conf"]
}
TOML Format:
[[tasks]]
type = "raw"
description = "Create configuration file"
executable = "touch"
args = ["/etc/myapp/config.conf"]
creates = true
creates_files = ["/etc/myapp/config.conf"]
Execute command that removes files (idempotent):
YAML Format:
- type: raw
description: "Remove temporary files"
executable: rm
args: ["-f", "/tmp/cache.dat"]
removes: true
removes_files: ["/tmp/cache.dat"]
JSON Format:
{
"type": "raw",
"description": "Remove temporary files",
"executable": "rm",
"args": ["-f", "/tmp/cache.dat"],
"removes": true,
"removes_files": ["/tmp/cache.dat"]
}
TOML Format:
[[tasks]]
type = "raw"
description = "Remove temporary files"
executable = "rm"
args = ["-f", "/tmp/cache.dat"]
removes = true
removes_files = ["/tmp/cache.dat"]
script
Description: Execute local scripts task
Required Fields:
- creates (bool): Whether the script creates resources
- creates_files (Vec): Files/directories created by the script (for the creates check)
- environment (HashMap<String, String>): Environment variables
- force (bool): Force script execution
- params (Vec): Script parameters/arguments
- path (String): Path to the script file
- removes (bool): Whether the script removes resources
- removes_files (Vec): Files/directories removed by the script (for the removes check)
Optional Fields:
- chdir (Option): Working directory for script execution
- description (Option): Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports
- timeout (Option): Execution timeout in seconds
- when (Option): Optional condition to determine if the task should run
Examples:
Execute a script:
YAML Format:
- type: script
description: "Run setup script"
path: /usr/local/bin/setup.sh
JSON Format:
{
"type": "script",
"description": "Run setup script",
"path": "/usr/local/bin/setup.sh"
}
TOML Format:
[[tasks]]
type = "script"
description = "Run setup script"
path = "/usr/local/bin/setup.sh"
Execute script with parameters:
YAML Format:
- type: script
description: "Run deployment script with environment"
path: /opt/deploy/deploy.sh
params: ["production", "--verbose"]
chdir: /opt/deploy
JSON Format:
{
"type": "script",
"description": "Run deployment script with environment",
"path": "/opt/deploy/deploy.sh",
"params": ["production", "--verbose"],
"chdir": "/opt/deploy"
}
TOML Format:
[[tasks]]
type = "script"
description = "Run deployment script with environment"
path = "/opt/deploy/deploy.sh"
params = ["production", "--verbose"]
chdir = "/opt/deploy"
Execute script with environment variables:
YAML Format:
- type: script
description: "Run script with environment"
path: /usr/local/bin/configure.sh
environment:
DATABASE_URL: "postgresql://localhost/mydb"
API_KEY: "secret-key"
timeout: 300
JSON Format:
{
"type": "script",
"description": "Run script with environment",
"path": "/usr/local/bin/configure.sh",
"environment": {
"DATABASE_URL": "postgresql://localhost/mydb",
"API_KEY": "secret-key"
},
"timeout": 300
}
TOML Format:
[[tasks]]
type = "script"
description = "Run script with environment"
path = "/usr/local/bin/configure.sh"
environment = { DATABASE_URL = "postgresql://localhost/mydb", API_KEY = "secret-key" }
timeout = 300
Execute script with creates/removes checks:
YAML Format:
- type: script
description: "Run initialization script"
path: /usr/local/bin/init.sh
creates: true
timeout: 600
JSON Format:
{
"type": "script",
"description": "Run initialization script",
"path": "/usr/local/bin/init.sh",
"creates": true,
"timeout": 600
}
TOML Format:
[[tasks]]
type = "script"
description = "Run initialization script"
path = "/usr/local/bin/init.sh"
creates = true
timeout = 600
File Operations
archive
Description: Archive files task
Required Fields:
- compression (u32): Compression level (1-9)
- extra_opts (Vec): Extra options for archiving
- format (ArchiveFormat): Archive format
- path (String): Archive file path
- sources (Vec): Files/directories to archive
- state (ArchiveState): Archive state
Optional Fields:
- description (Option): Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports
- dest (Option): Destination directory (for extraction)
- when (Option): Optional condition to determine if the task should run
Examples:
Create a tar archive:
YAML Format:
- type: archive
description: "Create backup archive"
path: /tmp/backup.tar
state: present
format: tar
sources:
- /home/user/documents
- /home/user/pictures
JSON Format:
{
"type": "archive",
"description": "Create backup archive",
"path": "/tmp/backup.tar",
"state": "present",
"format": "tar",
"sources": ["/home/user/documents", "/home/user/pictures"]
}
TOML Format:
[[tasks]]
type = "archive"
description = "Create backup archive"
path = "/tmp/backup.tar"
state = "present"
format = "tar"
sources = ["/home/user/documents", "/home/user/pictures"]
Create a compressed tar archive:
YAML Format:
- type: archive
description: "Create compressed backup"
path: /tmp/backup.tar.gz
state: present
format: tgz
sources:
- /var/log
compression: 9
JSON Format:
{
"type": "archive",
"description": "Create compressed backup",
"path": "/tmp/backup.tar.gz",
"state": "present",
"format": "tgz",
"sources": ["/var/log"],
"compression": 9
}
TOML Format:
[[tasks]]
type = "archive"
description = "Create compressed backup"
path = "/tmp/backup.tar.gz"
state = "present"
format = "tgz"
sources = ["/var/log"]
compression = 9
Create a zip archive:
YAML Format:
- type: archive
description: "Create zip archive"
path: /tmp/data.zip
state: present
format: zip
sources:
- /home/user/data
JSON Format:
{
"type": "archive",
"description": "Create zip archive",
"path": "/tmp/data.zip",
"state": "present",
"format": "zip",
"sources": ["/home/user/data"]
}
TOML Format:
[[tasks]]
type = "archive"
description = "Create zip archive"
path = "/tmp/data.zip"
state = "present"
format = "zip"
sources = ["/home/user/data"]
Remove an archive:
YAML Format:
- type: archive
description: "Remove old backup"
path: /tmp/old-backup.tar.gz
state: absent
JSON Format:
{
"type": "archive",
"description": "Remove old backup",
"path": "/tmp/old-backup.tar.gz",
"state": "absent"
}
TOML Format:
[[tasks]]
type = "archive"
description = "Remove old backup"
path = "/tmp/old-backup.tar.gz"
state = "absent"
blockinfile
Description: Insert/update multi-line blocks task
Required Fields:
- backup (bool): Backup the file before modification
- block (String): Block content (multi-line)
- create (bool): Create the file if it doesn't exist
- marker (String): Marker for block boundaries
- path (String): Path to the file
- state (BlockInFileState): Block state
Optional Fields:
- description (Option): Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports
- insertafter (Option): Insert after this line (regex)
- insertbefore (Option): Insert before this line (regex)
- when (Option): Optional condition to determine if the task should run
Examples:
Insert a configuration block:
YAML Format:
- type: blockinfile
description: "Add custom configuration block"
path: /etc/myapp/config.conf
state: present
block: |
# Custom configuration
custom_option = true
custom_value = 42
marker: "# {mark} Custom Config"
JSON Format:
{
"type": "blockinfile",
"description": "Add custom configuration block",
"path": "/etc/myapp/config.conf",
"state": "present",
"block": "# Custom configuration\ncustom_option = true\ncustom_value = 42\n",
"marker": "# {mark} Custom Config"
}
TOML Format:
[[tasks]]
type = "blockinfile"
description = "Add custom configuration block"
path = "/etc/myapp/config.conf"
state = "present"
block = """
# Custom configuration
custom_option = true
custom_value = 42
"""
marker = "# {mark} Custom Config"
Insert block after specific content:
YAML Format:
- type: blockinfile
description: "Add SSL configuration"
path: /etc/httpd/httpd.conf
state: present
block: |
SSLEngine on
SSLCertificateFile /etc/ssl/certs/server.crt
SSLCertificateKeyFile /etc/ssl/private/server.key
insertafter: "^# LoadModule ssl_module"
marker: "# {mark} SSL Config"
JSON Format:
{
"type": "blockinfile",
"description": "Add SSL configuration",
"path": "/etc/httpd/httpd.conf",
"state": "present",
"block": "SSLEngine on\nSSLCertificateFile /etc/ssl/certs/server.crt\nSSLCertificateKeyFile /etc/ssl/private/server.key\n",
"insertafter": "^# LoadModule ssl_module",
"marker": "# {mark} SSL Config"
}
TOML Format:
[[tasks]]
type = "blockinfile"
description = "Add SSL configuration"
path = "/etc/httpd/httpd.conf"
state = "present"
block = """
SSLEngine on
SSLCertificateFile /etc/ssl/certs/server.crt
SSLCertificateKeyFile /etc/ssl/private/server.key
"""
insertafter = "^# LoadModule ssl_module"
marker = "# {mark} SSL Config"
Insert block with backup:
YAML Format:
- type: blockinfile
description: "Add firewall rules with backup"
path: /etc/iptables/rules.v4
state: present
block: |
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
marker: "# {mark} Web Rules"
backup: true
JSON Format:
{
"type": "blockinfile",
"description": "Add firewall rules with backup",
"path": "/etc/iptables/rules.v4",
"state": "present",
"block": "-A INPUT -p tcp --dport 80 -j ACCEPT\n-A INPUT -p tcp --dport 443 -j ACCEPT\n",
"marker": "# {mark} Web Rules",
"backup": true
}
TOML Format:
[[tasks]]
type = "blockinfile"
description = "Add firewall rules with backup"
path = "/etc/iptables/rules.v4"
state = "present"
block = """
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
"""
marker = "# {mark} Web Rules"
backup = true
Remove a configuration block:
YAML Format:
- type: blockinfile
description: "Remove old configuration"
path: /etc/myapp/config.conf
state: absent
marker: "# {mark} Old Config"
JSON Format:
{
"type": "blockinfile",
"description": "Remove old configuration",
"path": "/etc/myapp/config.conf",
"state": "absent",
"marker": "# {mark} Old Config"
}
TOML Format:
[[tasks]]
type = "blockinfile"
description = "Remove old configuration"
path = "/etc/myapp/config.conf"
state = "absent"
marker = "# {mark} Old Config"
copy
Description: Copy files task
Required Fields:
- backup (bool): Whether to create a backup of the destination
- dest (String): Destination file path
- follow (bool): Whether to follow symlinks
- force (bool): Force copy even if the destination exists
- mode (bool): Whether to preserve permissions
- owner (bool): Whether to preserve ownership
- src (String): Source file path
- state (CopyState): Copy state
- timestamp (bool): Whether to preserve timestamps
Optional Fields:
- description (Option): Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports
- when (Option): Optional condition to determine if the task should run
Examples:
Copy a file:
YAML Format:
- type: copy
description: "Copy configuration file"
src: /etc/nginx/nginx.conf.template
dest: /etc/nginx/nginx.conf
state: present
JSON Format:
{
"type": "copy",
"description": "Copy configuration file",
"src": "/etc/nginx/nginx.conf.template",
"dest": "/etc/nginx/nginx.conf",
"state": "present"
}
TOML Format:
[[tasks]]
type = "copy"
description = "Copy configuration file"
src = "/etc/nginx/nginx.conf.template"
dest = "/etc/nginx/nginx.conf"
state = "present"
Copy with backup:
YAML Format:
- type: copy
description: "Copy config with backup"
src: /tmp/new-config.conf
dest: /etc/myapp/config.conf
state: present
backup: true
JSON Format:
{
"type": "copy",
"description": "Copy config with backup",
"src": "/tmp/new-config.conf",
"dest": "/etc/myapp/config.conf",
"state": "present",
"backup": true
}
TOML Format:
[[tasks]]
type = "copy"
description = "Copy config with backup"
src = "/tmp/new-config.conf"
dest = "/etc/myapp/config.conf"
state = "present"
backup = true
Remove a copied file:
YAML Format:
- type: copy
description: "Remove copied configuration"
src: /etc/nginx/nginx.conf.template
dest: /etc/nginx/nginx.conf
state: absent
JSON Format:
{
"type": "copy",
"description": "Remove copied configuration",
"src": "/etc/nginx/nginx.conf.template",
"dest": "/etc/nginx/nginx.conf",
"state": "absent"
}
TOML Format:
[[tasks]]
type = "copy"
description = "Remove copied configuration"
src = "/etc/nginx/nginx.conf.template"
dest = "/etc/nginx/nginx.conf"
state = "absent"
directory
Description: Directory management task
Required Fields:
- parents (bool): Whether to create parent directories
- path (String): Directory path
- recurse (bool): Whether to recursively set permissions
- state (DirectoryState): Directory state
Optional Fields:
- description (Option): Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports
- group (Option): Directory group
- mode (Option): Directory permissions (octal string like "0755")
- owner (Option): Directory owner
- when (Option): Optional condition to determine if the task should run
Examples:
Create a directory:
YAML Format:
- type: directory
description: "Create application directory"
path: /opt/myapp
state: present
mode: "0755"
owner: root
group: root
JSON Format:
{
"type": "directory",
"description": "Create application directory",
"path": "/opt/myapp",
"state": "present",
"mode": "0755",
"owner": "root",
"group": "root"
}
TOML Format:
[[tasks]]
type = "directory"
description = "Create application directory"
path = "/opt/myapp"
state = "present"
mode = "0755"
owner = "root"
group = "root"
Create directory with parent directories:
YAML Format:
- type: directory
description: "Create nested directory structure"
path: /var/log/myapp/subdir
state: present
mode: "0750"
owner: myapp
group: myapp
parents: true
JSON Format:
{
"type": "directory",
"description": "Create nested directory structure",
"path": "/var/log/myapp/subdir",
"state": "present",
"mode": "0750",
"owner": "myapp",
"group": "myapp",
"parents": true
}
TOML Format:
[[tasks]]
type = "directory"
description = "Create nested directory structure"
path = "/var/log/myapp/subdir"
state = "present"
mode = "0750"
owner = "myapp"
group = "myapp"
parents = true
Remove a directory:
YAML Format:
- type: directory
description: "Remove temporary directory"
path: /tmp/old-data
state: absent
JSON Format:
{
"type": "directory",
"description": "Remove temporary directory",
"path": "/tmp/old-data",
"state": "absent"
}
TOML Format:
[[tasks]]
type = "directory"
description = "Remove temporary directory"
path = "/tmp/old-data"
state = "absent"
fetch
Description: Fetch files from remote hosts task
Required Fields:
- dest (String): Destination file path
- follow_redirects (bool): Follow redirects
- force (bool): Force download even if the file exists
- headers (HashMap<String, String>): HTTP headers
- state (FetchState): Fetch state
- timeout (u64): Timeout in seconds
- url (String): Source URL
- validate_certs (bool): Validate SSL certificates
Optional Fields:
- description (Option): Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports
- password (Option): Password for basic auth
- username (Option): Username for basic auth
- when (Option): Optional condition to determine if the task should run
Examples:
Download a file:
YAML Format:
- type: fetch
description: "Download configuration file"
url: http://example.com/config.yml
dest: /etc/myapp/config.yml
state: present
JSON Format:
{
"type": "fetch",
"description": "Download configuration file",
"url": "http://example.com/config.yml",
"dest": "/etc/myapp/config.yml",
"state": "present"
}
TOML Format:
[[tasks]]
type = "fetch"
description = "Download configuration file"
url = "http://example.com/config.yml"
dest = "/etc/myapp/config.yml"
state = "present"
Download with authentication:
YAML Format:
- type: fetch
description: "Download private file"
url: https://private.example.com/file.txt
dest: /tmp/private.txt
state: present
username: myuser
password: mypassword
JSON Format:
{
"type": "fetch",
"description": "Download private file",
"url": "https://private.example.com/file.txt",
"dest": "/tmp/private.txt",
"state": "present",
"username": "myuser",
"password": "mypassword"
}
TOML Format:
[[tasks]]
type = "fetch"
description = "Download private file"
url = "https://private.example.com/file.txt"
dest = "/tmp/private.txt"
state = "present"
username = "myuser"
password = "mypassword"
Download with custom headers:
YAML Format:
- type: fetch
description: "Download with custom headers"
url: https://api.example.com/data.json
dest: /tmp/data.json
state: present
headers:
Authorization: "Bearer token123"
X-API-Key: "apikey456"
JSON Format:
{
"type": "fetch",
"description": "Download with custom headers",
"url": "https://api.example.com/data.json",
"dest": "/tmp/data.json",
"state": "present",
"headers": {
"Authorization": "Bearer token123",
"X-API-Key": "apikey456"
}
}
TOML Format:
[[tasks]]
type = "fetch"
description = "Download with custom headers"
url = "https://api.example.com/data.json"
dest = "/tmp/data.json"
state = "present"
headers = { Authorization = "Bearer token123", "X-API-Key" = "apikey456" }
Force download:
YAML Format:
- type: fetch
description: "Force download latest version"
url: https://example.com/latest.tar.gz
dest: /tmp/latest.tar.gz
state: present
force: true
JSON Format:
{
"type": "fetch",
"description": "Force download latest version",
"url": "https://example.com/latest.tar.gz",
"dest": "/tmp/latest.tar.gz",
"state": "present",
"force": true
}
TOML Format:
[[tasks]]
type = "fetch"
description = "Force download latest version"
url = "https://example.com/latest.tar.gz"
dest = "/tmp/latest.tar.gz"
state = "present"
force = true
file
Description: File operation task
Manages files and directories - create, modify, or remove files with content,
permissions, and ownership. Similar to Ansible’s file module.
Required Fields:
- path (String): Absolute path to the file or directory to manage; parent directories will not be created automatically
- state (FileState): File state. present: ensure the file exists with the specified properties; absent: ensure the file does not exist
Optional Fields:
- content (Option): Content to write to the file when state is present; mutually exclusive with source
- description (Option): Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports
- group (Option): Group name for the file; only applied when creating or modifying files
- mode (Option): File permissions in octal notation (e.g., "0644", "0755"); only applied when creating or modifying files
- owner (Option): Username of the file owner; only applied when creating or modifying files
- source (Option): Path to a source file to copy content from when state is present; mutually exclusive with content
- when (Option): Optional condition to determine if the task should run
Examples:
Create a file with content:
YAML Format:
- type: file
description: "Create nginx configuration file"
path: /etc/nginx/sites-available/default
state: present
content: |
server {
listen 80;
root /var/www/html;
index index.html index.htm;
location / {
try_files $uri $uri/ =404;
}
}
mode: "0644"
owner: root
group: root
JSON Format:
{
"type": "file",
"description": "Create nginx configuration file",
"path": "/etc/nginx/sites-available/default",
"state": "present",
"content": "server {\n listen 80;\n root /var/www/html;\n index index.html index.htm;\n\n location / {\n try_files $uri $uri/ =404;\n }\n}",
"mode": "0644",
"owner": "root",
"group": "root"
}
TOML Format:
[[tasks]]
type = "file"
description = "Create nginx configuration file"
path = "/etc/nginx/sites-available/default"
state = "present"
content = """
server {
listen 80;
root /var/www/html;
index index.html index.htm;
location / {
try_files $uri $uri/ =404;
}
}
"""
mode = "0644"
owner = "root"
group = "root"
Register file creation:
YAML Format:
- type: file
description: "Create marker file"
path: /tmp/driftless.marker
state: present
register: marker_file
JSON Format:
{
"type": "file",
"description": "Create marker file",
"path": "/tmp/driftless.marker",
"state": "present",
"register": "marker_file"
}
TOML Format:
[[tasks]]
type = "file"
description = "Create marker file"
path = "/tmp/driftless.marker"
state = "present"
register = "marker_file"
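The source field (mutually exclusive with content) is not exercised in the examples above. A minimal sketch, using hypothetical paths, that copies a prepared file into place instead of inlining its content:

```yaml
# Hypothetical paths; source is mutually exclusive with content
- type: file
  description: "Install prepared nginx site config"
  path: /etc/nginx/sites-available/default
  state: present
  source: /opt/configs/nginx-default.conf
  mode: "0644"
  owner: root
  group: root
```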
lineinfile
Description: Ensure line in file task
Required Fields:
- backup (bool): Backup the file before modification
- create (bool): Create the file if it doesn't exist
- line (String): The line content
- path (String): Path to the file
- state (LineInFileState): Line state
Optional Fields:
- description (Option): Optional description of what this task does. Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports.
- insertafter (Option): Insert after the line matching this regex
- insertbefore (Option): Insert before the line matching this regex
- regexp (Option): Regular expression to match an existing line
- when (Option): Optional condition to determine if the task should run
Examples:
Add a line to a file:
YAML Format:
- type: lineinfile
description: "Add localhost entry to hosts file"
path: /etc/hosts
state: present
line: "127.0.0.1 localhost"
JSON Format:
{
"type": "lineinfile",
"description": "Add localhost entry to hosts file",
"path": "/etc/hosts",
"state": "present",
"line": "127.0.0.1 localhost"
}
TOML Format:
[[tasks]]
type = "lineinfile"
description = "Add localhost entry to hosts file"
path = "/etc/hosts"
state = "present"
line = "127.0.0.1 localhost"
Replace a line using regex:
YAML Format:
- type: lineinfile
description: "Update SSH port configuration"
path: /etc/ssh/sshd_config
state: present
regexp: "^#?Port .*"
line: "Port 22"
JSON Format:
{
"type": "lineinfile",
"description": "Update SSH port configuration",
"path": "/etc/ssh/sshd_config",
"state": "present",
"regexp": "^#?Port .*",
"line": "Port 22"
}
TOML Format:
[[tasks]]
type = "lineinfile"
description = "Update SSH port configuration"
path = "/etc/ssh/sshd_config"
state = "present"
regexp = "^#?Port .*"
line = "Port 22"
Insert line after a pattern:
YAML Format:
- type: lineinfile
description: "Add include directive after main config"
path: /etc/nginx/nginx.conf
state: present
line: "include /etc/nginx/sites-enabled/*;"
insertafter: 'http \{'
JSON Format:
{
"type": "lineinfile",
"description": "Add include directive after main config",
"path": "/etc/nginx/nginx.conf",
"state": "present",
"line": "include /etc/nginx/sites-enabled/*;",
"insertafter": "http \\{"
}
TOML Format:
[[tasks]]
type = "lineinfile"
description = "Add include directive after main config"
path = "/etc/nginx/nginx.conf"
state = "present"
line = "include /etc/nginx/sites-enabled/*;"
insertafter = 'http \{'
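None of the examples removes a line. Assuming LineInFileState supports absent (by analogy with the other operations), removing a matching line could be sketched as:

```yaml
# Assumes state: absent removes the line matched by regexp
- type: lineinfile
  description: "Remove legacy host entry"
  path: /etc/hosts
  state: absent
  regexp: '^10\.0\.0\.5\s+oldhost$'
  line: "10.0.0.5 oldhost"
```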
replace
Description: Replace text in files task
Required Fields:
- backup (bool): Backup the file before modification
- path (String): Path to the file
- replace (String): Replacement string
- replace_all (bool): Replace all occurrences
- state (ReplaceState): Replace state
Optional Fields:
- before (Option): String to match (alternative to regexp)
- description (Option): Optional description of what this task does. Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports.
- regexp (Option): Regular expression to match
- when (Option): Optional condition to determine if the task should run
Examples:
Replace text using regex:
YAML Format:
- type: replace
description: "Update database host"
path: /etc/myapp/config.ini
state: present
regexp: '^db_host\s*=\s*.*$'
replace: 'db_host = newdb.example.com'
JSON Format:
{
"type": "replace",
"description": "Update database host",
"path": "/etc/myapp/config.ini",
"state": "present",
"regexp": "^db_host\\s*=\\s*.*$",
"replace": "db_host = newdb.example.com"
}
TOML Format:
[[tasks]]
type = "replace"
description = "Update database host"
path = "/etc/myapp/config.ini"
state = "present"
regexp = "^db_host\\s*=\\s*.*$"
replace = "db_host = newdb.example.com"
Replace string literal:
YAML Format:
- type: replace
description: "Update version number"
path: /opt/myapp/version.txt
state: present
before: 'version = "1.0.0"'
replace: 'version = "1.1.0"'
JSON Format:
{
"type": "replace",
"description": "Update version number",
"path": "/opt/myapp/version.txt",
"state": "present",
"before": "version = \"1.0.0\"",
"replace": "version = \"1.1.0\""
}
TOML Format:
[[tasks]]
type = "replace"
description = "Update version number"
path = "/opt/myapp/version.txt"
state = "present"
before = 'version = "1.0.0"'
replace = 'version = "1.1.0"'
Replace with backup:
YAML Format:
- type: replace
description: "Update configuration with backup"
path: /etc/httpd/httpd.conf
state: present
regexp: '^Listen 80$'
replace: 'Listen 8080'
backup: true
JSON Format:
{
"type": "replace",
"description": "Update configuration with backup",
"path": "/etc/httpd/httpd.conf",
"state": "present",
"regexp": "^Listen 80$",
"replace": "Listen 8080",
"backup": true
}
TOML Format:
[[tasks]]
type = "replace"
description = "Update configuration with backup"
path = "/etc/httpd/httpd.conf"
state = "present"
regexp = "^Listen 80$"
replace = "Listen 8080"
backup = true
Replace all occurrences:
YAML Format:
- type: replace
description: "Update all IP addresses"
path: /etc/hosts
state: present
regexp: '192\.168\.1\.\d+'
replace: '10.0.0.100'
replace_all: true
JSON Format:
{
"type": "replace",
"description": "Update all IP addresses",
"path": "/etc/hosts",
"state": "present",
"regexp": "192\\.168\\.1\\.\\d+",
"replace": "10.0.0.100",
"replace_all": true
}
TOML Format:
[[tasks]]
type = "replace"
description = "Update all IP addresses"
path = "/etc/hosts"
state = "present"
regexp = "192\\.168\\.1\\.\\d+"
replace = "10.0.0.100"
replace_all = true
stat
Description: File/directory statistics task
Required Fields:
- checksum (bool): Get a checksum of the file
- checksum_algorithm (ChecksumAlgorithm): Checksum algorithm
- follow (bool): Whether to follow symlinks
- path (String): Path to check
Optional Fields:
- description (Option): Optional description of what this task does. Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports.
- register (Option): Optional variable name to register the task result in
- when (Option): Optional condition to determine if the task should run
Registered Outputs:
- checksum (String): The file checksum (if checksum is true)
- exists (bool): Whether the file or directory exists
- gid (u32): The group ID of the owner
- is_dir (bool): Whether the path is a directory
- is_file (bool): Whether the path is a file
- mode (u32): The file mode (permissions)
- modified (u64): Last modification time (epoch seconds)
- size (u64): The size of the file in bytes
- uid (u32): The user ID of the owner
Examples:
Get file statistics:
YAML Format:
- type: stat
description: "Get file statistics"
path: /etc/passwd
JSON Format:
{
"type": "stat",
"description": "Get file statistics",
"path": "/etc/passwd"
}
TOML Format:
[[tasks]]
type = "stat"
description = "Get file statistics"
path = "/etc/passwd"
Get file checksum:
YAML Format:
- type: stat
description: "Get file checksum"
path: /etc/hosts
checksum: true
checksum_algorithm: sha256
JSON Format:
{
"type": "stat",
"description": "Get file checksum",
"path": "/etc/hosts",
"checksum": true,
"checksum_algorithm": "sha256"
}
TOML Format:
[[tasks]]
type = "stat"
description = "Get file checksum"
path = "/etc/hosts"
checksum = true
checksum_algorithm = "sha256"
Follow symlinks:
YAML Format:
- type: stat
description: "Follow symlink for statistics"
path: /var/log/syslog
follow: true
JSON Format:
{
"type": "stat",
"description": "Follow symlink for statistics",
"path": "/var/log/syslog",
"follow": true
}
TOML Format:
[[tasks]]
type = "stat"
description = "Follow symlink for statistics"
path = "/var/log/syslog"
follow = true
Register file status:
YAML Format:
- type: stat
description: "Check if nginx config exists"
path: /etc/nginx/nginx.conf
register: nginx_conf
- type: debug
msg: "Nginx config exists: {{ nginx_conf.exists }}"
when: "{{ nginx_conf.exists }}"
JSON Format:
[
{
"type": "stat",
"description": "Check if nginx config exists",
"path": "/etc/nginx/nginx.conf",
"register": "nginx_conf"
},
{
"type": "debug",
"msg": "Nginx config exists: {{ nginx_conf.exists }}",
"when": "{{ nginx_conf.exists }}"
}
]
TOML Format:
[[tasks]]
type = "stat"
description = "Check if nginx config exists"
path = "/etc/nginx/nginx.conf"
register = "nginx_conf"
[[tasks]]
type = "debug"
msg = "Nginx config exists: {{ nginx_conf.exists }}"
when = "{{ nginx_conf.exists }}"
Get directory statistics:
YAML Format:
- type: stat
description: "Get directory statistics"
path: /home/user
JSON Format:
{
"type": "stat",
"description": "Get directory statistics",
"path": "/home/user"
}
TOML Format:
[[tasks]]
type = "stat"
description = "Get directory statistics"
path = "/home/user"
template
Description: Template rendering task
Required Fields:
- backup (bool): Backup the destination before templating
- dest (String): Destination file
- force (bool): Force template rendering
- src (String): Source template file
- state (TemplateState): Template state
- vars (HashMap<String, Value>): Variables for template rendering
Optional Fields:
- description (Option): Optional description of what this task does. Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports.
- template_dir (Option): Template directory for includes/imports. Directory containing templates that can be included or imported. If not specified, includes/imports will not work.
- when (Option): Optional condition to determine if the task should run
Examples:
Render a template:
YAML Format:
- type: template
description: "Render nginx configuration"
src: /templates/nginx.conf.j2
dest: /etc/nginx/sites-available/default
state: present
vars:
server_name: example.com
port: 80
root_dir: /var/www/html
JSON Format:
{
"type": "template",
"description": "Render nginx configuration",
"src": "/templates/nginx.conf.j2",
"dest": "/etc/nginx/sites-available/default",
"state": "present",
"vars": {
"server_name": "example.com",
"port": 80,
"root_dir": "/var/www/html"
}
}
TOML Format:
[[tasks]]
type = "template"
description = "Render nginx configuration"
src = "/templates/nginx.conf.j2"
dest = "/etc/nginx/sites-available/default"
state = "present"
[tasks.vars]
server_name = "example.com"
port = 80
root_dir = "/var/www/html"
Render template with backup:
YAML Format:
- type: template
description: "Update config with backup"
src: /templates/app.conf.j2
dest: /etc/myapp/config.conf
state: present
backup: true
vars:
database_host: localhost
database_port: 5432
JSON Format:
{
"type": "template",
"description": "Update config with backup",
"src": "/templates/app.conf.j2",
"dest": "/etc/myapp/config.conf",
"state": "present",
"backup": true,
"vars": {
"database_host": "localhost",
"database_port": 5432
}
}
TOML Format:
[[tasks]]
type = "template"
description = "Update config with backup"
src = "/templates/app.conf.j2"
dest = "/etc/myapp/config.conf"
state = "present"
backup = true
[tasks.vars]
database_host = "localhost"
database_port = 5432
Remove rendered template:
YAML Format:
- type: template
description: "Remove rendered configuration"
src: /templates/old.conf.j2
dest: /etc/oldapp/config.conf
state: absent
JSON Format:
{
"type": "template",
"description": "Remove rendered configuration",
"src": "/templates/old.conf.j2",
"dest": "/etc/oldapp/config.conf",
"state": "absent"
}
TOML Format:
[[tasks]]
type = "template"
description = "Remove rendered configuration"
src = "/templates/old.conf.j2"
dest = "/etc/oldapp/config.conf"
state = "absent"
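The template_dir field, which enables Jinja2 includes/imports, is documented above but not demonstrated. A sketch with hypothetical paths:

```yaml
# Hypothetical paths; template_dir lets base.conf.j2 use {% include %} / {% import %}
- type: template
  description: "Render config that includes shared snippets"
  src: /templates/base.conf.j2
  dest: /etc/myapp/config.conf
  state: present
  template_dir: /templates
  vars:
    env: production
```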
unarchive
Description: Unarchive files task
Required Fields:
- creates (bool): Whether to create the destination directory
- dest (String): Destination directory
- extra_opts (Vec): Extra options for extraction
- follow_redirects (bool): Follow redirects for URL downloads
- headers (HashMap<String, String>): HTTP headers for URL downloads
- keep_original (bool): Whether to keep the archive after extraction
- list_files (Vec): List of files to extract (empty = all)
- src (String): Source archive file (local path) or URL
- state (UnarchiveState): Unarchive state
- timeout (u64): Timeout for URL downloads
- validate_certs (bool): Validate SSL certificates for URL downloads
Optional Fields:
- description (Option): Optional description of what this task does. Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports.
- format (Option): Archive format (auto-detected if not specified)
- password (Option): Password for basic auth for URL downloads
- username (Option): Username for basic auth for URL downloads
- when (Option): Optional condition to determine if the task should run
Examples:
Extract a tar archive:
YAML Format:
- type: unarchive
description: "Extract application archive"
src: /tmp/myapp.tar.gz
dest: /opt/myapp
state: present
JSON Format:
{
"type": "unarchive",
"description": "Extract application archive",
"src": "/tmp/myapp.tar.gz",
"dest": "/opt/myapp",
"state": "present"
}
TOML Format:
[[tasks]]
type = "unarchive"
description = "Extract application archive"
src = "/tmp/myapp.tar.gz"
dest = "/opt/myapp"
state = "present"
Extract from URL:
YAML Format:
- type: unarchive
description: "Download and extract software"
src: https://example.com/software.tar.gz
dest: /opt/software
state: present
creates: true
JSON Format:
{
"type": "unarchive",
"description": "Download and extract software",
"src": "https://example.com/software.tar.gz",
"dest": "/opt/software",
"state": "present",
"creates": true
}
TOML Format:
[[tasks]]
type = "unarchive"
description = "Download and extract software"
src = "https://example.com/software.tar.gz"
dest = "/opt/software"
state = "present"
creates = true
Extract specific files:
YAML Format:
- type: unarchive
description: "Extract configuration files"
src: /tmp/configs.tar.gz
dest: /etc/myapp
state: present
list_files:
- config.yml
- settings.json
JSON Format:
{
"type": "unarchive",
"description": "Extract configuration files",
"src": "/tmp/configs.tar.gz",
"dest": "/etc/myapp",
"state": "present",
"list_files": ["config.yml", "settings.json"]
}
TOML Format:
[[tasks]]
type = "unarchive"
description = "Extract configuration files"
src = "/tmp/configs.tar.gz"
dest = "/etc/myapp"
state = "present"
list_files = ["config.yml", "settings.json"]
Extract zip archive:
YAML Format:
- type: unarchive
description: "Extract zip archive"
src: /tmp/data.zip
dest: /var/data
state: present
format: zip
JSON Format:
{
"type": "unarchive",
"description": "Extract zip archive",
"src": "/tmp/data.zip",
"dest": "/var/data",
"state": "present",
"format": "zip"
}
TOML Format:
[[tasks]]
type = "unarchive"
description = "Extract zip archive"
src = "/tmp/data.zip"
dest = "/var/data"
state = "present"
format = "zip"
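The URL-download options (username, password, headers, timeout) are listed above but never exercised. A sketch with hypothetical values:

```yaml
# Hypothetical URL and credentials; headers and timeout apply only to URL sources
- type: unarchive
  description: "Download and extract a private release"
  src: https://releases.example.com/app.tar.gz
  dest: /opt/app
  state: present
  username: deploy
  password: secret
  headers:
    Accept: application/octet-stream
  timeout: 60
  validate_certs: true
```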
Monitoring & Logging
journald
Description: Journald configuration task
Manages systemd journal configuration. Can modify the main /etc/systemd/journald.conf file or create drop-in configuration files in /etc/systemd/journald.conf.d/. Supports all journald configuration options like storage settings, size limits, forwarding options, and compression settings.
Required Fields:
- config (HashMap<String, String>): Journald configuration options. Key-value pairs of journald configuration options; required when state is present. Common options include:
  - Storage: volatile|persistent|auto|none
  - SystemMaxUse: Maximum disk space to use
  - SystemKeepFree: Disk space to keep free
  - SystemMaxFileSize: Maximum size of individual journal files
  - MaxRetentionSec: Maximum time to retain journal entries
  - ForwardToSyslog: Forward to syslog
  - Compress: Enable compression
- state (JournaldState): Configuration state (present, absent). present: ensure the journald configuration exists. absent: ensure the journald configuration does not exist.
Optional Fields:
- description (Option): Optional description of what this task does. Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports.
- name (Option): Configuration name (for drop-in configs). Name of the drop-in configuration file to create in /etc/systemd/journald.conf.d/. If not specified, the main /etc/systemd/journald.conf file is modified. This becomes the filename (e.g., "storage" creates /etc/systemd/journald.conf.d/storage.conf).
- when (Option): Optional condition to determine if the task should run
Examples:
Configure journald storage and rotation:
YAML Format:
- type: journald
description: "Configure systemd journal settings"
config:
Storage: persistent
SystemMaxUse: 100M
SystemKeepFree: 500M
SystemMaxFileSize: 10M
MaxRetentionSec: 1week
state: present
JSON Format:
{
"type": "journald",
"description": "Configure systemd journal settings",
"config": {
"Storage": "persistent",
"SystemMaxUse": "100M",
"SystemKeepFree": "500M",
"SystemMaxFileSize": "10M",
"MaxRetentionSec": "1week"
},
"state": "present"
}
TOML Format:
[[tasks]]
type = "journald"
description = "Configure systemd journal settings"
state = "present"
[tasks.config]
Storage = "persistent"
SystemMaxUse = "100M"
SystemKeepFree = "500M"
SystemMaxFileSize = "10M"
MaxRetentionSec = "1week"
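The name field (drop-in configs) is described above but not shown. A sketch that would create /etc/systemd/journald.conf.d/storage.conf rather than editing the main file:

```yaml
# Uses name to write a drop-in instead of modifying /etc/systemd/journald.conf
- type: journald
  description: "Persistent storage via drop-in"
  name: storage
  config:
    Storage: persistent
    Compress: "yes"
  state: present
```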
logrotate
Description: Logrotate configuration task
Manages logrotate configuration files in /etc/logrotate.d/. Creates or removes logrotate configuration snippets for log rotation management.
Required Fields:
- name (String): Configuration name. Name of the logrotate configuration file to create in /etc/logrotate.d/. This becomes the filename (e.g., "nginx" creates /etc/logrotate.d/nginx).
- options (Vec): Logrotate options. List of logrotate configuration options. Common options include:
  - "daily", "weekly", "monthly", "yearly"
  - "rotate N" (keep N rotations)
  - "compress", "delaycompress"
  - "missingok", "notifempty"
  - "create MODE OWNER GROUP"
- state (LogrotateState): Configuration state (present, absent). present: ensure the logrotate configuration exists. absent: ensure the logrotate configuration does not exist.
Optional Fields:
- description (Option): Optional description of what this task does. Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports.
- path (Option): Log file path(s). Path or glob pattern for the log files to rotate; required when state is present. Examples: "/var/log/app/*.log", "/var/log/nginx/access.log"
- postrotate (Option): Post-rotate script. Shell commands to execute after log rotation; commonly used to reload services.
- when (Option): Optional condition to determine if the task should run
Examples:
Create a logrotate configuration for nginx:
YAML Format:
- type: logrotate
description: "Configure nginx log rotation"
name: nginx
path: /var/log/nginx/*.log
options:
- weekly
- rotate 52
- compress
- delaycompress
- missingok
- notifempty
- create 644 www-data www-data
postrotate: |
systemctl reload nginx
state: present
JSON Format:
{
"type": "logrotate",
"description": "Configure nginx log rotation",
"name": "nginx",
"path": "/var/log/nginx/*.log",
"options": [
"weekly",
"rotate 52",
"compress",
"delaycompress",
"missingok",
"notifempty",
"create 644 www-data www-data"
],
"postrotate": "systemctl reload nginx\n",
"state": "present"
}
TOML Format:
[[tasks]]
type = "logrotate"
description = "Configure nginx log rotation"
name = "nginx"
path = "/var/log/nginx/*.log"
options = [
"weekly",
"rotate 52",
"compress",
"delaycompress",
"missingok",
"notifempty",
"create 644 www-data www-data"
]
postrotate = """
systemctl reload nginx
"""
state = "present"
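Removing a configuration is the absent counterpart; a minimal sketch (options is listed as required, so an empty list is assumed here when removing):

```yaml
# state: absent deletes /etc/logrotate.d/oldapp if it exists
- type: logrotate
  description: "Drop obsolete rotation config"
  name: oldapp
  options: []
  state: absent
```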
rsyslog
Description: Rsyslog configuration task
Manages rsyslog configuration files in /etc/rsyslog.d/. Creates or removes rsyslog configuration snippets for log processing and forwarding.
Required Fields:
- name (String): Configuration name. Name of the rsyslog configuration file to create in /etc/rsyslog.d/. This becomes the filename (e.g., "remote-logging" creates /etc/rsyslog.d/remote-logging.conf).
- state (RsyslogState): Configuration state (present, absent). present: ensure the rsyslog configuration exists. absent: ensure the rsyslog configuration does not exist.
Optional Fields:
- config (Option): Rsyslog configuration content. The rsyslog configuration directives; required when state is present. Examples include log forwarding rules, custom log files, and filters.
- description (Option): Optional description of what this task does. Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports.
- when (Option): Optional condition to determine if the task should run
Examples:
Create an rsyslog configuration for remote logging:
YAML Format:
- type: rsyslog
description: "Configure remote log forwarding"
name: remote-logging
config: |
# Forward all logs to remote server
*.* @@logserver.example.com:514
# Forward auth logs with TCP
auth.* @@logserver.example.com:514
state: present
JSON Format:
{
"type": "rsyslog",
"description": "Configure remote log forwarding",
"name": "remote-logging",
"config": "*.* @@logserver.example.com:514\n\nauth.* @@logserver.example.com:514\n",
"state": "present"
}
TOML Format:
[[tasks]]
type = "rsyslog"
description = "Configure remote log forwarding"
name = "remote-logging"
config = """
*.* @@logserver.example.com:514
auth.* @@logserver.example.com:514
"""
state = "present"
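The absent state removes a snippet without needing config; a minimal sketch:

```yaml
# state: absent deletes /etc/rsyslog.d/remote-logging.conf
- type: rsyslog
  description: "Remove remote log forwarding"
  name: remote-logging
  state: absent
```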
Network Operations
geturl
Description: Download files from HTTP/HTTPS/FTP task
Downloads files from web servers or FTP servers. Supports authentication,
checksum validation, and file permission management. Similar to Ansible’s get_url module.
Required Fields:
- backup (bool): Backup the destination before download
- dest (String): Destination file path
- follow_redirects (bool): Follow redirects
- force (bool): Force download even if the file exists
- headers (HashMap<String, String>): HTTP headers
- state (GetUrlState): Get URL state
- timeout (u64): Timeout in seconds
- url (String): Source URL
- validate_certs (bool): Validate SSL certificates
Optional Fields:
- checksum (Option): Checksum validation
- description (Option): Optional description of what this task does. Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports.
- group (Option): File group
- mode (Option): File permissions as an octal string (e.g. "0644")
- owner (Option): File owner
- password (Option): Password for basic auth
- username (Option): Username for basic auth
- when (Option): Optional condition to determine if the task should run
Examples:
Download a file:
YAML Format:
- type: get_url
description: "Download configuration file"
url: https://example.com/config.yml
dest: /etc/myapp/config.yml
state: present
JSON Format:
{
"type": "get_url",
"description": "Download configuration file",
"url": "https://example.com/config.yml",
"dest": "/etc/myapp/config.yml",
"state": "present"
}
TOML Format:
[[tasks]]
type = "get_url"
description = "Download configuration file"
url = "https://example.com/config.yml"
dest = "/etc/myapp/config.yml"
state = "present"
Download with checksum validation:
YAML Format:
- type: get_url
description: "Download software with checksum validation"
url: https://example.com/software.tar.gz
dest: /tmp/software.tar.gz
state: present
checksum: sha256:abc123def456...
JSON Format:
{
"type": "get_url",
"description": "Download software with checksum validation",
"url": "https://example.com/software.tar.gz",
"dest": "/tmp/software.tar.gz",
"state": "present",
"checksum": "sha256:abc123def456..."
}
TOML Format:
[[tasks]]
type = "get_url"
description = "Download software with checksum validation"
url = "https://example.com/software.tar.gz"
dest = "/tmp/software.tar.gz"
state = "present"
checksum = "sha256:abc123def456..."
Download with authentication:
YAML Format:
- type: get_url
description: "Download private file"
url: https://private.example.com/file.txt
dest: /tmp/private.txt
state: present
username: myuser
password: mypassword
JSON Format:
{
"type": "get_url",
"description": "Download private file",
"url": "https://private.example.com/file.txt",
"dest": "/tmp/private.txt",
"state": "present",
"username": "myuser",
"password": "mypassword"
}
TOML Format:
[[tasks]]
type = "get_url"
description = "Download private file"
url = "https://private.example.com/file.txt"
dest = "/tmp/private.txt"
state = "present"
username = "myuser"
password = "mypassword"
Download and set permissions:
YAML Format:
- type: get_url
description: "Download script with proper permissions"
url: https://example.com/script.sh
dest: /usr/local/bin/myscript.sh
state: present
mode: "0755"
owner: root
group: root
JSON Format:
{
"type": "get_url",
"description": "Download script with proper permissions",
"url": "https://example.com/script.sh",
"dest": "/usr/local/bin/myscript.sh",
"state": "present",
"mode": "0755",
"owner": "root",
"group": "root"
}
TOML Format:
[[tasks]]
type = "get_url"
description = "Download script with proper permissions"
url = "https://example.com/script.sh"
dest = "/usr/local/bin/myscript.sh"
state = "present"
mode = "0755"
owner = "root"
group = "root"
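Custom request headers are documented above but not demonstrated. A sketch with hypothetical values:

```yaml
# Hypothetical URL and token; headers are sent with the download request
- type: get_url
  description: "Download release asset with bearer token"
  url: https://api.example.com/releases/latest/asset
  dest: /tmp/asset.tar.gz
  state: present
  headers:
    Authorization: "Bearer <token>"
    Accept: application/octet-stream
  timeout: 30
```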
uri
Description: Interact with web services task
Makes HTTP requests to web services and APIs. Validates responses and can
return content. Similar to Ansible’s uri module.
Required Fields:
- follow_redirects (bool): Follow redirects
- force (bool): Force execution even if idempotent
- headers (HashMap<String, String>): HTTP headers
- method (HttpMethod): HTTP method
- return_content (bool): Return content in the result
- state (UriState): URI state
- status_code (Vec): Expected status codes
- timeout (u64): Timeout in seconds
- url (String): Target URL
- validate_certs (bool): Validate SSL certificates
Optional Fields:
- body (Option): Request body
- content_type (Option): Content type for the request body
- description (Option): Optional description of what this task does. Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports.
- password (Option): Password for basic auth
- register (Option): Optional variable name to register the task result in
- username (Option): Username for basic auth
- when (Option): Optional condition to determine if the task should run
Registered Outputs:
- changed (bool): Whether the request was successfully made
- content (String): The body of the response (if return_content is true)
- status (u16): The HTTP status code of the response
Examples:
Simple GET request:
YAML Format:
- type: uri
description: "Check API health endpoint"
url: https://api.example.com/health
method: GET
status_code: 200
return_content: true
JSON Format:
{
"type": "uri",
"description": "Check API health endpoint",
"url": "https://api.example.com/health",
"method": "GET",
"status_code": 200,
"return_content": true
}
TOML Format:
[[tasks]]
type = "uri"
description = "Check API health endpoint"
url = "https://api.example.com/health"
method = "GET"
status_code = 200
return_content = true
POST request with JSON body:
YAML Format:
- type: uri
description: "Create a new user via API"
url: https://api.example.com/users
method: POST
body: "{\"name\": \"John Doe\", \"email\": \"john@example.com\"}"
headers:
Content-Type: application/json
status_code: 201
JSON Format:
{
"type": "uri",
"description": "Create a new user via API",
"url": "https://api.example.com/users",
"method": "POST",
"body": "{\"name\": \"John Doe\", \"email\": \"john@example.com\"}",
"headers": {
"Content-Type": "application/json"
},
"status_code": 201
}
TOML Format:
[[tasks]]
type = "uri"
description = "Create a new user via API"
url = "https://api.example.com/users"
method = "POST"
body = "{\"name\": \"John Doe\", \"email\": \"john@example.com\"}"
status_code = 201
[tasks.headers]
Content-Type = "application/json"
Request with authentication:
YAML Format:
- type: uri
description: "Get user profile with authentication"
url: https://api.example.com/profile
method: GET
username: myuser
password: mypassword
return_content: true
JSON Format:
{
"type": "uri",
"description": "Get user profile with authentication",
"url": "https://api.example.com/profile",
"method": "GET",
"username": "myuser",
"password": "mypassword",
"return_content": true
}
TOML Format:
[[tasks]]
type = "uri"
description = "Get user profile with authentication"
url = "https://api.example.com/profile"
method = "GET"
username = "myuser"
password = "mypassword"
return_content = true
Register URI response:
YAML Format:
- type: uri
description: "Get health status"
url: https://api.example.com/health
register: health_response
return_content: true
- type: debug
msg: "The API status code is: {{ health_response.status }}"
JSON Format:
[
{
"type": "uri",
"description": "Get health status",
"url": "https://api.example.com/health",
"register": "health_response",
"return_content": true
},
{
"type": "debug",
"msg": "The API status code is: {{ health_response.status }}"
}
]
TOML Format:
[[tasks]]
type = "uri"
description = "Get health status"
url = "https://api.example.com/health"
register = "health_response"
return_content = true
[[tasks]]
type = "debug"
msg = "The API status code is: {{ health_response.status }}"
Package Management
apt
Description: Debian/Ubuntu package management task
Required Fields:
- allow_downgrades (bool): Allow downgrades
- allow_unauthenticated (bool): Allow unauthenticated packages
- autoclean (bool): Autoclean the package cache
- autoremove (bool): Autoremove unused packages
- cache_valid_time (u32): Cache validity time in seconds
- force (bool): Force installation
- name (String): Package name
- state (PackageState): Package state
- update_cache (bool): Update the package cache
Optional Fields:
- description (Option): Optional description of what this task does. Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports.
- when (Option): Optional condition to determine if the task should run
Examples:
Install a package:
YAML Format:
- type: apt
description: "Install curl package"
name: curl
state: present
JSON Format:
{
"type": "apt",
"description": "Install curl package",
"name": "curl",
"state": "present"
}
TOML Format:
[[tasks]]
type = "apt"
description = "Install curl package"
name = "curl"
state = "present"
Install package with cache update:
YAML Format:
- type: apt
description: "Install nginx with cache update"
name: nginx
state: present
update_cache: true
JSON Format:
{
"type": "apt",
"description": "Install nginx with cache update",
"name": "nginx",
"state": "present",
"update_cache": true
}
TOML Format:
[[tasks]]
type = "apt"
description = "Install nginx with cache update"
name = "nginx"
state = "present"
update_cache = true
Remove a package:
YAML Format:
- type: apt
description: "Remove apache2 package"
name: apache2
state: absent
JSON Format:
{
"type": "apt",
"description": "Remove apache2 package",
"name": "apache2",
"state": "absent"
}
TOML Format:
[[tasks]]
type = "apt"
description = "Remove apache2 package"
name = "apache2"
state = "absent"
Update package to latest version:
YAML Format:
- type: apt
description: "Update vim to latest version"
name: vim
state: latest
update_cache: true
JSON Format:
{
"type": "apt",
"description": "Update vim to latest version",
"name": "vim",
"state": "latest",
"update_cache": true
}
TOML Format:
[[tasks]]
type = "apt"
description = "Update vim to latest version"
name = "vim"
state = "latest"
update_cache = true
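The cache-maintenance flags (autoremove, autoclean, cache_valid_time) are listed above but never exercised. A sketch:

```yaml
# Skips the cache update if it is under an hour old, then removes
# unused dependencies and cleans stale cache entries
- type: apt
  description: "Install htop and tidy the package state"
  name: htop
  state: present
  update_cache: true
  cache_valid_time: 3600
  autoremove: true
  autoclean: true
```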
gem
Description: Ruby gem management task
Required Fields:
- executable (String): Ruby executable path
- extra_args (Vec): Extra arguments
- force (bool): Force installation
- gem_executable (String): Gem executable path
- install_doc (bool): Install documentation
- name (String): Gem name
- state (PackageState): Gem state
- user_install (bool): User installation
Optional Fields:
- description (Option): Optional description of what this task does. Human-readable description of the task's purpose; used for documentation and can be displayed in logs or reports.
- version (Option): Version specification
- when (Option): Optional condition to determine if the task should run
Examples:
Install a gem:
YAML Format:
- type: gem
description: "Install bundler gem"
name: bundler
state: present
JSON Format:
{
"type": "gem",
"description": "Install bundler gem",
"name": "bundler",
"state": "present"
}
TOML Format:
[[tasks]]
type = "gem"
description = "Install bundler gem"
name = "bundler"
state = "present"
Install gem with specific version:
YAML Format:
- type: gem
description: "Install Rails 7.0"
name: rails
state: present
version: "7.0.0"
JSON Format:
{
"type": "gem",
"description": "Install Rails 7.0",
"name": "rails",
"state": "present",
"version": "7.0.0"
}
TOML Format:
[[tasks]]
type = "gem"
description = "Install Rails 7.0"
name = "rails"
state = "present"
version = "7.0.0"
Install gem for specific user:
YAML Format:
- type: gem
description: "Install jekyll for user"
name: jekyll
state: present
user_install: true
JSON Format:
{
"type": "gem",
"description": "Install jekyll for user",
"name": "jekyll",
"state": "present",
"user_install": true
}
TOML Format:
[[tasks]]
type = "gem"
description = "Install jekyll for user"
name = "jekyll"
state = "present"
user_install = true
Remove a gem:
YAML Format:
- type: gem
description: "Remove bundler gem"
name: bundler
state: absent
JSON Format:
{
"type": "gem",
"description": "Remove bundler gem",
"name": "bundler",
"state": "absent"
}
TOML Format:
[[tasks]]
type = "gem"
description = "Remove bundler gem"
name = "bundler"
state = "absent"
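The remaining boolean fields combine the same way. A hypothetical sketch using the documented install_doc and force flags (the gem name and flag values are illustrative, not taken from the examples above):

```yaml
# Sketch: install a gem without documentation, forcing installation.
# install_doc and force are the documented boolean fields; values are illustrative.
- type: gem
  description: "Install rake without docs"
  name: rake
  state: present
  install_doc: false
  force: true
```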
npm
Description: Node.js package management task
Required Fields:
- executable(String): NPM executable path
- extra_args(Vec): Extra arguments
- force(bool): Force installation
- global(bool): Global installation
- name(String): Package name
- production(bool): Production only
- state(PackageState): Package state
Optional Fields:
- description(Option): Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- registry(Option): Registry URL
- version(Option): Version specification
- when(Option): Optional condition to determine if the task should run
Examples:
Install an npm package:
YAML Format:
- type: npm
description: "Install express package"
name: express
state: present
JSON Format:
{
"type": "npm",
"description": "Install express package",
"name": "express",
"state": "present"
}
TOML Format:
[[tasks]]
type = "npm"
description = "Install express package"
name = "express"
state = "present"
Install package globally:
YAML Format:
- type: npm
description: "Install PM2 globally"
name: pm2
state: present
global: true
JSON Format:
{
"type": "npm",
"description": "Install PM2 globally",
"name": "pm2",
"state": "present",
"global": true
}
TOML Format:
[[tasks]]
type = "npm"
description = "Install PM2 globally"
name = "pm2"
state = "present"
global = true
Install specific version:
YAML Format:
- type: npm
description: "Install React 18"
name: react
state: present
version: "18.2.0"
JSON Format:
{
"type": "npm",
"description": "Install React 18",
"name": "react",
"state": "present",
"version": "18.2.0"
}
TOML Format:
[[tasks]]
type = "npm"
description = "Install React 18"
name = "react"
state = "present"
version = "18.2.0"
Remove an npm package:
YAML Format:
- type: npm
description: "Remove express package"
name: express
state: absent
JSON Format:
{
"type": "npm",
"description": "Remove express package",
"name": "express",
"state": "absent"
}
TOML Format:
[[tasks]]
type = "npm"
description = "Remove express package"
name = "express"
state = "absent"
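The optional registry field can point installs at an alternate registry. A hedged sketch (the registry URL is a placeholder, not a real endpoint):

```yaml
# Sketch: install from an alternate registry via the documented `registry` field.
# The URL shown is a placeholder.
- type: npm
  description: "Install lodash from an internal registry"
  name: lodash
  state: present
  registry: "https://registry.example.com"
```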
package
Description: Package management task
Required Fields:
- name(String): Package name
- state(PackageState): Package state
Optional Fields:
- description(Option): Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- manager(Option): Package manager to use (auto-detect if not specified)
- register(Option): Optional variable name to register the task result in
- when(Option): Optional condition to determine if the task should run
Registered Outputs:
- changed(bool): Whether any packages were installed or removed
- packages(Vec<String>): List of packages affected
Examples:
Install a package:
YAML Format:
- type: package
description: "Install nginx web server"
name: nginx
state: present
JSON Format:
{
"type": "package",
"description": "Install nginx web server",
"name": "nginx",
"state": "present"
}
TOML Format:
[[tasks]]
type = "package"
description = "Install nginx web server"
name = "nginx"
state = "present"
Install with specific package manager:
YAML Format:
- type: package
description: "Install curl using apt"
name: curl
state: present
manager: apt
JSON Format:
{
"type": "package",
"description": "Install curl using apt",
"name": "curl",
"state": "present",
"manager": "apt"
}
TOML Format:
[[tasks]]
type = "package"
description = "Install curl using apt"
name = "curl"
state = "present"
manager = "apt"
Update a package to latest version:
YAML Format:
- type: package
description: "Update vim to latest version"
name: vim
state: latest
JSON Format:
{
"type": "package",
"description": "Update vim to latest version",
"name": "vim",
"state": "latest"
}
TOML Format:
[[tasks]]
type = "package"
description = "Update vim to latest version"
name = "vim"
state = "latest"
Remove a package:
YAML Format:
- type: package
description: "Remove telnet client"
name: telnet
state: absent
JSON Format:
{
"type": "package",
"description": "Remove telnet client",
"name": "telnet",
"state": "absent"
}
TOML Format:
[[tasks]]
type = "package"
description = "Remove telnet client"
name = "telnet"
state = "absent"
Register package installation:
YAML Format:
- type: package
description: "Install git and check if changed"
name: git
state: present
register: git_install
- type: debug
msg: "Git was newly installed"
when: "{{ git_install.changed }}"
JSON Format:
[
{
"type": "package",
"description": "Install git and check if changed",
"name": "git",
"state": "present",
"register": "git_install"
},
{
"type": "debug",
"msg": "Git was newly installed",
"when": "{{ git_install.changed }}"
}
]
TOML Format:
[[tasks]]
type = "package"
description = "Install git and check if changed"
name = "git"
state = "present"
register = "git_install"
[[tasks]]
type = "debug"
msg = "Git was newly installed"
when = "{{ git_install.changed }}"
pacman
Description: Arch Linux package management task
Required Fields:
- force(bool): Force installation/removal
- name(String): Package name
- reinstall(bool): Force reinstallation
- remove_config(bool): Remove configuration files
- remove_dependencies(bool): Remove dependencies
- state(PackageState): Package state
- update_cache(bool): Update package database
- upgrade(bool): Upgrade system
Optional Fields:
- description(Option): Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- when(Option): Optional condition to determine if the task should run
Examples:
Install a package:
YAML Format:
- type: pacman
description: "Install vim package"
name: vim
state: present
JSON Format:
{
"type": "pacman",
"description": "Install vim package",
"name": "vim",
"state": "present"
}
TOML Format:
[[tasks]]
type = "pacman"
description = "Install vim package"
name = "vim"
state = "present"
Install with cache update:
YAML Format:
- type: pacman
description: "Install nginx with cache update"
name: nginx
state: present
update_cache: true
JSON Format:
{
"type": "pacman",
"description": "Install nginx with cache update",
"name": "nginx",
"state": "present",
"update_cache": true
}
TOML Format:
[[tasks]]
type = "pacman"
description = "Install nginx with cache update"
name = "nginx"
state = "present"
update_cache = true
Remove a package:
YAML Format:
- type: pacman
description: "Remove vim package"
name: vim
state: absent
JSON Format:
{
"type": "pacman",
"description": "Remove vim package",
"name": "vim",
"state": "absent"
}
TOML Format:
[[tasks]]
type = "pacman"
description = "Remove vim package"
name = "vim"
state = "absent"
Remove package with dependencies:
YAML Format:
- type: pacman
description: "Remove package with dependencies"
name: old-package
state: absent
remove_dependencies: true
JSON Format:
{
"type": "pacman",
"description": "Remove package with dependencies",
"name": "old-package",
"state": "absent",
"remove_dependencies": true
}
TOML Format:
[[tasks]]
type = "pacman"
description = "Remove package with dependencies"
name = "old-package"
state = "absent"
remove_dependencies = true
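The documented reinstall flag forces reinstallation of an already-installed package. A hedged sketch (the package name is illustrative):

```yaml
# Sketch: force reinstallation using the documented `reinstall` flag.
- type: pacman
  description: "Reinstall vim"
  name: vim
  state: present
  reinstall: true
```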
pip
Description: Python package management task
Required Fields:
- executable(String): Python executable path
- extra_args(Vec): Extra arguments
- force(bool): Force installation
- name(String): Package name
- state(PackageState): Package state
Optional Fields:
- description(Option): Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- requirements(Option): Requirements file
- version(Option): Version specification
- virtualenv(Option): Virtual environment path
- when(Option): Optional condition to determine if the task should run
Examples:
Install a Python package:
YAML Format:
- type: pip
description: "Install requests package"
name: requests
state: present
JSON Format:
{
"type": "pip",
"description": "Install requests package",
"name": "requests",
"state": "present"
}
TOML Format:
[[tasks]]
type = "pip"
description = "Install requests package"
name = "requests"
state = "present"
Install package in virtual environment:
YAML Format:
- type: pip
description: "Install Django in virtualenv"
name: django
state: present
virtualenv: /opt/myapp/venv
JSON Format:
{
"type": "pip",
"description": "Install Django in virtualenv",
"name": "django",
"state": "present",
"virtualenv": "/opt/myapp/venv"
}
TOML Format:
[[tasks]]
type = "pip"
description = "Install Django in virtualenv"
name = "django"
state = "present"
virtualenv = "/opt/myapp/venv"
Install specific version:
YAML Format:
- type: pip
description: "Install Flask 2.0"
name: flask
state: present
version: "2.0.0"
JSON Format:
{
"type": "pip",
"description": "Install Flask 2.0",
"name": "flask",
"state": "present",
"version": "2.0.0"
}
TOML Format:
[[tasks]]
type = "pip"
description = "Install Flask 2.0"
name = "flask"
state = "present"
version = "2.0.0"
Remove a Python package:
YAML Format:
- type: pip
description: "Remove requests package"
name: requests
state: absent
JSON Format:
{
"type": "pip",
"description": "Remove requests package",
"name": "requests",
"state": "absent"
}
TOML Format:
[[tasks]]
type = "pip"
description = "Remove requests package"
name = "requests"
state = "absent"
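Extra pip arguments can be passed through the documented extra_args field. A hedged sketch, assuming extra_args accepts a YAML list (--no-cache-dir is a standard pip flag; the combination is illustrative):

```yaml
# Sketch: pass additional pip arguments via the documented `extra_args` field.
# Assumes extra_args is given as a list of strings.
- type: pip
  description: "Install requests without using the pip cache"
  name: requests
  state: present
  extra_args: ["--no-cache-dir"]
```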
yum
Description: RHEL/CentOS/Fedora package management task
Required Fields:
- allow_downgrades(bool): Allow downgrades
- disable_excludes(bool): Disable excludes
- disable_gpg_check(bool): Disable GPG check
- force(bool): Force installation
- install_recommended(bool): Install recommended packages
- install_suggested(bool): Install suggested packages
- name(String): Package name
- state(PackageState): Package state
- update_cache(bool): Update package cache
Optional Fields:
- description(Option): Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- when(Option): Optional condition to determine if the task should run
Examples:
Install a package:
YAML Format:
- type: yum
description: "Install nginx web server"
name: nginx
state: present
JSON Format:
{
"type": "yum",
"description": "Install nginx web server",
"name": "nginx",
"state": "present"
}
TOML Format:
[[tasks]]
type = "yum"
description = "Install nginx web server"
name = "nginx"
state = "present"
Install with cache update:
YAML Format:
- type: yum
description: "Install curl with cache update"
name: curl
state: present
update_cache: true
JSON Format:
{
"type": "yum",
"description": "Install curl with cache update",
"name": "curl",
"state": "present",
"update_cache": true
}
TOML Format:
[[tasks]]
type = "yum"
description = "Install curl with cache update"
name = "curl"
state = "present"
update_cache = true
Remove a package:
YAML Format:
- type: yum
description: "Remove telnet package"
name: telnet
state: absent
JSON Format:
{
"type": "yum",
"description": "Remove telnet package",
"name": "telnet",
"state": "absent"
}
TOML Format:
[[tasks]]
type = "yum"
description = "Remove telnet package"
name = "telnet"
state = "absent"
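The documented disable_gpg_check flag skips signature verification, for example when installing from an internal mirror. A hedged sketch (the package name is a placeholder; skipping GPG checks weakens supply-chain safety, so use with care):

```yaml
# Sketch: install without GPG verification via the documented flag.
# "internal-tool" is a placeholder package name.
- type: yum
  description: "Install package without GPG verification"
  name: internal-tool
  state: present
  disable_gpg_check: true
```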
zypper
Description: SUSE package management task
Required Fields:
- allow_downgrades(bool): Allow downgrades
- allow_vendor_change(bool): Allow vendor changes
- disable_gpg_check(bool): Disable GPG check
- force(bool): Force installation
- name(String): Package name
- state(PackageState): Package state
- update_cache(bool): Update package cache
Optional Fields:
- description(Option): Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- when(Option): Optional condition to determine if the task should run
Examples:
Install a package:
YAML Format:
- type: zypper
description: "Install apache web server"
name: apache2
state: present
JSON Format:
{
"type": "zypper",
"description": "Install apache web server",
"name": "apache2",
"state": "present"
}
TOML Format:
[[tasks]]
type = "zypper"
description = "Install apache web server"
name = "apache2"
state = "present"
Install with cache update:
YAML Format:
- type: zypper
description: "Install vim with repository refresh"
name: vim
state: present
update_cache: true
JSON Format:
{
"type": "zypper",
"description": "Install vim with repository refresh",
"name": "vim",
"state": "present",
"update_cache": true
}
TOML Format:
[[tasks]]
type = "zypper"
description = "Install vim with repository refresh"
name = "vim"
state = "present"
update_cache = true
Remove a package:
YAML Format:
- type: zypper
description: "Remove telnet package"
name: telnet
state: absent
JSON Format:
{
"type": "zypper",
"description": "Remove telnet package",
"name": "telnet",
"state": "absent"
}
TOML Format:
[[tasks]]
type = "zypper"
description = "Remove telnet package"
name = "telnet"
state = "absent"
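The documented allow_vendor_change flag permits zypper to switch a package's vendor during an update. A hedged sketch, assuming state: latest is accepted here as in the other package tasks (the package name is illustrative):

```yaml
# Sketch: update while permitting a vendor change, via the documented flag.
- type: zypper
  description: "Update libzypp allowing vendor change"
  name: libzypp
  state: latest
  allow_vendor_change: true
```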
Security & Access
authorized_key
Description: SSH authorized key management task
Required Fields:
- create_ssh_dir(bool): Whether to create the .ssh directory if it doesn't exist
- manage_dir(bool): Whether to manage SSH directory permissions
- state(AuthorizedKeyState): SSH key state (present/absent)
- unique(bool): Whether to deduplicate keys
- user(String): Target user for SSH key management
- validate_key(bool): Whether to validate key format
Optional Fields:
- comment(Option): Comment to identify this key
- description(Option): Optional description of what this task does
- key(Option): SSH public key content (inline)
- key_file(Option): Path to SSH public key file
- key_options(Option): Key options (comma-separated list)
- path(Option): Path to authorized_keys file (defaults to ~/.ssh/authorized_keys)
- when(Option): Optional condition to determine if the task should run
Examples:
Add SSH public key:
YAML Format:
- type: authorized_key
description: "Add SSH key for admin user"
user: admin
state: present
key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7vbqajDhS... user@host"
JSON Format:
{
"type": "authorized_key",
"description": "Add SSH key for admin user",
"user": "admin",
"state": "present",
"key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7vbqajDhS... user@host"
}
TOML Format:
[[tasks]]
type = "authorized_key"
description = "Add SSH key for admin user"
user = "admin"
state = "present"
key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7vbqajDhS... user@host"
Add SSH key from file:
YAML Format:
- type: authorized_key
description: "Add SSH key from file"
user: deploy
state: present
key_file: /tmp/id_rsa.pub
comment: "Deployment key"
JSON Format:
{
"type": "authorized_key",
"description": "Add SSH key from file",
"user": "deploy",
"state": "present",
"key_file": "/tmp/id_rsa.pub",
"comment": "Deployment key"
}
TOML Format:
[[tasks]]
type = "authorized_key"
description = "Add SSH key from file"
user = "deploy"
state = "present"
key_file = "/tmp/id_rsa.pub"
comment = "Deployment key"
Add SSH key with restrictions:
YAML Format:
- type: authorized_key
description: "Add restricted SSH key"
user: backup
state: present
key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7vbqajDhS... backup@host"
key_options: "command=\"/usr/local/bin/backup.sh\",no-port-forwarding,no-X11-forwarding,no-agent-forwarding"
JSON Format:
{
"type": "authorized_key",
"description": "Add restricted SSH key",
"user": "backup",
"state": "present",
"key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7vbqajDhS... backup@host",
"key_options": "command=\"/usr/local/bin/backup.sh\",no-port-forwarding,no-X11-forwarding,no-agent-forwarding"
}
TOML Format:
[[tasks]]
type = "authorized_key"
description = "Add restricted SSH key"
user = "backup"
state = "present"
key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7vbqajDhS... backup@host"
key_options = "command=\"/usr/local/bin/backup.sh\",no-port-forwarding,no-X11-forwarding,no-agent-forwarding"
Remove SSH key:
YAML Format:
- type: authorized_key
description: "Remove SSH key"
user: olduser
state: absent
key: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7vbqajDhS... olduser@host"
JSON Format:
{
"type": "authorized_key",
"description": "Remove SSH key",
"user": "olduser",
"state": "absent",
"key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7vbqajDhS... olduser@host"
}
TOML Format:
[[tasks]]
type = "authorized_key"
description = "Remove SSH key"
user = "olduser"
state = "absent"
key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7vbqajDhS... olduser@host"
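Keys can also be managed in a non-default location through the documented optional path field. A hedged sketch (the user, key file, and target path are assumptions for illustration):

```yaml
# Sketch: manage keys in a custom authorized_keys location via `path`.
# The paths shown are illustrative.
- type: authorized_key
  description: "Add key to custom authorized_keys file"
  user: git
  state: present
  key_file: /tmp/git_id.pub
  path: /etc/ssh/authorized_keys.d/git
```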
firewalld
Description: Firewalld firewall management task
Required Fields:
- check_running(bool): Whether to check if firewalld is running
- permanent(bool): Whether to make changes permanent
- reload(bool): Whether to reload the firewall after changes
- state(FirewalldState): Firewall state (present/absent)
- zone(String): Zone to manage (defaults to "public")
Optional Fields:
- description(Option): Optional description of what this task does
- port(Option): Port to manage (e.g., "8080/tcp", "53/udp")
- rich_rule(Option): Rich rule to manage
- service(Option): Service to manage (e.g., "http", "ssh")
- when(Option): Optional condition to determine if the task should run
Examples:
Allow SSH service:
YAML Format:
- type: firewalld
description: "Allow SSH access"
state: present
service: ssh
zone: public
permanent: true
JSON Format:
{
"type": "firewalld",
"description": "Allow SSH access",
"state": "present",
"service": "ssh",
"zone": "public",
"permanent": true
}
TOML Format:
[[tasks]]
type = "firewalld"
description = "Allow SSH access"
state = "present"
service = "ssh"
zone = "public"
permanent = true
Allow custom port:
YAML Format:
- type: firewalld
description: "Allow web traffic on port 8080"
state: present
port: "8080/tcp"
zone: public
permanent: true
JSON Format:
{
"type": "firewalld",
"description": "Allow web traffic on port 8080",
"state": "present",
"port": "8080/tcp",
"zone": "public",
"permanent": true
}
TOML Format:
[[tasks]]
type = "firewalld"
description = "Allow web traffic on port 8080"
state = "present"
port = "8080/tcp"
zone = "public"
permanent = true
Add rich rule:
YAML Format:
- type: firewalld
description: "Allow traffic from specific IP"
state: present
rich_rule: 'rule family="ipv4" source address="192.168.1.100" accept'
zone: public
permanent: true
JSON Format:
{
"type": "firewalld",
"description": "Allow traffic from specific IP",
"state": "present",
"rich_rule": "rule family=\"ipv4\" source address=\"192.168.1.100\" accept",
"zone": "public",
"permanent": true
}
TOML Format:
[[tasks]]
type = "firewalld"
description = "Allow traffic from specific IP"
state = "present"
rich_rule = 'rule family="ipv4" source address="192.168.1.100" accept'
zone = "public"
permanent = true
Remove firewall rule:
YAML Format:
- type: firewalld
description: "Remove SSH access"
state: absent
service: ssh
zone: public
permanent: true
JSON Format:
{
"type": "firewalld",
"description": "Remove SSH access",
"state": "absent",
"service": "ssh",
"zone": "public",
"permanent": true
}
TOML Format:
[[tasks]]
type = "firewalld"
description = "Remove SSH access"
state = "absent"
service = "ssh"
zone = "public"
permanent = true
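A permanent change only takes effect in the running firewall after a reload; the documented reload flag covers this in one task. A hedged sketch:

```yaml
# Sketch: make a permanent change and reload firewalld in the same task,
# using the documented `reload` flag.
- type: firewalld
  description: "Allow HTTPS and reload"
  state: present
  service: https
  zone: public
  permanent: true
  reload: true
```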
iptables
Description: iptables firewall management task
Required Fields:
- chain(String): Chain to manage (INPUT/OUTPUT/FORWARD/PREROUTING/POSTROUTING)
- check_available(bool): Whether to check if iptables is available
- extra_args(Vec): Additional iptables arguments
- ipv6(bool): IPv6 mode (use ip6tables instead of iptables)
- protocol(String): Protocol (tcp/udp/icmp/all)
- state(IptablesState): iptables state (present/absent)
- table(String): Table to manage (filter/nat/mangle/raw/security)
- target(String): Target/jump action (ACCEPT/DROP/REJECT/LOG/MASQUERADE)
Optional Fields:
- description(Option): Optional description of what this task does
- destination(Option): Destination IP/network (with optional mask)
- dport(Option): Destination port (for tcp/udp)
- in_interface(Option): Input interface
- out_interface(Option): Output interface
- source(Option): Source IP/network (with optional mask)
- sport(Option): Source port (for tcp/udp)
- when(Option): Optional condition to determine if the task should run
Examples:
Allow SSH access:
YAML Format:
- type: iptables
description: "Allow SSH access"
state: present
table: filter
chain: INPUT
protocol: tcp
dport: "22"
target: ACCEPT
JSON Format:
{
"type": "iptables",
"description": "Allow SSH access",
"state": "present",
"table": "filter",
"chain": "INPUT",
"protocol": "tcp",
"dport": "22",
"target": "ACCEPT"
}
TOML Format:
[[tasks]]
type = "iptables"
description = "Allow SSH access"
state = "present"
table = "filter"
chain = "INPUT"
protocol = "tcp"
dport = "22"
target = "ACCEPT"
Block specific IP address:
YAML Format:
- type: iptables
description: "Block specific IP address"
state: present
table: filter
chain: INPUT
source: 192.168.1.100
target: DROP
JSON Format:
{
"type": "iptables",
"description": "Block specific IP address",
"state": "present",
"table": "filter",
"chain": "INPUT",
"source": "192.168.1.100",
"target": "DROP"
}
TOML Format:
[[tasks]]
type = "iptables"
description = "Block specific IP address"
state = "present"
table = "filter"
chain = "INPUT"
source = "192.168.1.100"
target = "DROP"
Allow HTTP and HTTPS traffic:
YAML Format:
- type: iptables
description: "Allow web traffic"
state: present
table: filter
chain: INPUT
protocol: tcp
dport: "80,443"
target: ACCEPT
JSON Format:
{
"type": "iptables",
"description": "Allow web traffic",
"state": "present",
"table": "filter",
"chain": "INPUT",
"protocol": "tcp",
"dport": "80,443",
"target": "ACCEPT"
}
TOML Format:
[[tasks]]
type = "iptables"
description = "Allow web traffic"
state = "present"
table = "filter"
chain = "INPUT"
protocol = "tcp"
dport = "80,443"
target = "ACCEPT"
Remove iptables rule:
YAML Format:
- type: iptables
description: "Remove SSH blocking rule"
state: absent
table: filter
chain: INPUT
protocol: tcp
dport: "22"
target: DROP
JSON Format:
{
"type": "iptables",
"description": "Remove SSH blocking rule",
"state": "absent",
"table": "filter",
"chain": "INPUT",
"protocol": "tcp",
"dport": "22",
"target": "DROP"
}
TOML Format:
[[tasks]]
type = "iptables"
description = "Remove SSH blocking rule"
state = "absent"
table = "filter"
chain = "INPUT"
protocol = "tcp"
dport = "22"
target = "DROP"
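The documented ipv6 flag switches the task to ip6tables. A hedged sketch mirroring the SSH example above:

```yaml
# Sketch: the documented `ipv6` flag makes the task use ip6tables.
- type: iptables
  description: "Allow SSH over IPv6"
  state: present
  ipv6: true
  table: filter
  chain: INPUT
  protocol: tcp
  dport: "22"
  target: ACCEPT
```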
selinux
Description: SELinux policy management task
Required Fields:
- follow(bool): Whether to follow symlinks
- ignore_missing(bool): Whether to ignore missing files
- persistent(bool): Whether to make changes persistent
- recurse(bool): Whether to recurse into directories
- state(SelinuxState): SELinux state (present/absent/enforcing/permissive/disabled)
Optional Fields:
- boolean(Option): SELinux boolean to manage
- context(Option): SELinux context to set
- description(Option): Optional description of what this task does
- policy(Option): Policy type (targeted/mls)
- serange(Option): SELinux level/range to set
- serole(Option): SELinux role to set
- setype(Option): SELinux type to set
- seuser(Option): SELinux user to set
- target(Option): File/directory to set context on
- when(Option): Optional condition to determine if the task should run
Examples:
Enable SELinux boolean:
YAML Format:
- type: selinux
description: "Enable httpd_can_network_connect"
state: "on"
boolean: httpd_can_network_connect
JSON Format:
{
"type": "selinux",
"description": "Enable httpd_can_network_connect",
"state": "on",
"boolean": "httpd_can_network_connect"
}
TOML Format:
[[tasks]]
type = "selinux"
description = "Enable httpd_can_network_connect"
state = "on"
boolean = "httpd_can_network_connect"
Set file context:
YAML Format:
- type: selinux
description: "Set httpd context for web directory"
state: context
target: /var/www/html
setype: httpd_sys_content_t
recurse: true
JSON Format:
{
"type": "selinux",
"description": "Set httpd context for web directory",
"state": "context",
"target": "/var/www/html",
"setype": "httpd_sys_content_t",
"recurse": true
}
TOML Format:
[[tasks]]
type = "selinux"
description = "Set httpd context for web directory"
state = "context"
target = "/var/www/html"
setype = "httpd_sys_content_t"
recurse = true
Set SELinux to enforcing mode:
YAML Format:
- type: selinux
description: "Set SELinux to enforcing mode"
state: enforcing
JSON Format:
{
"type": "selinux",
"description": "Set SELinux to enforcing mode",
"state": "enforcing"
}
TOML Format:
[[tasks]]
type = "selinux"
description = "Set SELinux to enforcing mode"
state = "enforcing"
Restore file contexts:
YAML Format:
- type: selinux
description: "Restore SELinux contexts"
state: restorecon
target: /etc/httpd
recurse: true
JSON Format:
{
"type": "selinux",
"description": "Restore SELinux contexts",
"state": "restorecon",
"target": "/etc/httpd",
"recurse": true
}
TOML Format:
[[tasks]]
type = "selinux"
description = "Restore SELinux contexts"
state = "restorecon"
target = "/etc/httpd"
recurse = true
sudoers
Description: Sudoers configuration management task
Required Fields:
- backup(bool): Backup file before modification
- commands(Vec): Commands to allow (defaults to ALL)
- group(bool): Whether this is a group (prefix with %)
- hosts(Vec): Hosts to allow (defaults to ALL)
- name(String): User or group to grant sudo privileges
- noexec(bool): NOEXEC option (prevent shell escapes)
- nopasswd(bool): NOPASSWD option (don't require password)
- setenv(bool): SETENV option (allow environment variable setting)
- state(SudoersState): Sudoers state (present/absent)
- validate(bool): Whether to validate sudoers syntax after changes
Optional Fields:
- description(Option): Optional description of what this task does
- path(Option): Path to sudoers file (defaults to /etc/sudoers)
- runas(Option): Run as user (defaults to ALL)
- when(Option): Optional condition to determine if the task should run
Examples:
Grant sudo access to user:
YAML Format:
- type: sudoers
description: "Grant sudo access to admin user"
state: present
name: admin
commands: ["ALL"]
hosts: ["ALL"]
JSON Format:
{
"type": "sudoers",
"description": "Grant sudo access to admin user",
"state": "present",
"name": "admin",
"commands": ["ALL"],
"hosts": ["ALL"]
}
TOML Format:
[[tasks]]
type = "sudoers"
description = "Grant sudo access to admin user"
state = "present"
name = "admin"
commands = ["ALL"]
hosts = ["ALL"]
Grant sudo access to group:
YAML Format:
- type: sudoers
description: "Grant sudo access to wheel group"
state: present
name: wheel
group: true
commands: ["ALL"]
hosts: ["ALL"]
JSON Format:
{
"type": "sudoers",
"description": "Grant sudo access to wheel group",
"state": "present",
"name": "wheel",
"group": true,
"commands": ["ALL"],
"hosts": ["ALL"]
}
TOML Format:
[[tasks]]
type = "sudoers"
description = "Grant sudo access to wheel group"
state = "present"
name = "wheel"
group = true
commands = ["ALL"]
hosts = ["ALL"]
Grant passwordless sudo for specific commands:
YAML Format:
- type: sudoers
description: "Grant passwordless sudo for service management"
state: present
name: deploy
commands: ["/usr/bin/systemctl", "/usr/bin/service"]
hosts: ["ALL"]
nopasswd: true
JSON Format:
{
"type": "sudoers",
"description": "Grant passwordless sudo for service management",
"state": "present",
"name": "deploy",
"commands": ["/usr/bin/systemctl", "/usr/bin/service"],
"hosts": ["ALL"],
"nopasswd": true
}
TOML Format:
[[tasks]]
type = "sudoers"
description = "Grant passwordless sudo for service management"
state = "present"
name = "deploy"
commands = ["/usr/bin/systemctl", "/usr/bin/service"]
hosts = ["ALL"]
nopasswd = true
Remove sudo privileges:
YAML Format:
- type: sudoers
description: "Remove sudo access from user"
state: absent
name: olduser
JSON Format:
{
"type": "sudoers",
"description": "Remove sudo access from user",
"state": "absent",
"name": "olduser"
}
TOML Format:
[[tasks]]
type = "sudoers"
description = "Remove sudo access from user"
state = "absent"
name = "olduser"
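The documented runas field restricts which target account the commands may run as. A hedged sketch (the user, target account, and command are illustrative):

```yaml
# Sketch: limit sudo to one command, run only as a specific account,
# via the documented `runas` field. Names are illustrative.
- type: sudoers
  description: "Let app user run psql as postgres"
  state: present
  name: appuser
  runas: postgres
  commands: ["/usr/bin/psql"]
  nopasswd: true
```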
ufw
Description: UFW firewall management task
Required Fields:
- state(UfwState): UFW state
Optional Fields:
- default(Option): Default policy for chains
- description(Option): Optional description of what this task does
- direction(Option): Direction (in/out)
- from(Option): Source IP/network (for the from parameter)
- interface(Option): Interface to apply the rule to
- logging(Option): Logging level
- port(Option): Port to manage (e.g., "80", "443/tcp", "53/udp")
- proto(Option): Protocol (tcp/udp)
- rule(Option): Rule to manage
- to(Option): Destination IP/network (for the to parameter)
- when(Option): Optional condition to determine if the task should run
Examples:
Enable UFW firewall:
YAML Format:
- type: ufw
description: "Enable UFW firewall"
state: enabled
JSON Format:
{
"type": "ufw",
"description": "Enable UFW firewall",
"state": "enabled"
}
TOML Format:
[[tasks]]
type = "ufw"
description = "Enable UFW firewall"
state = "enabled"
Allow SSH access:
YAML Format:
- type: ufw
description: "Allow SSH access"
state: allow
port: "22"
proto: tcp
JSON Format:
{
"type": "ufw",
"description": "Allow SSH access",
"state": "allow",
"port": "22",
"proto": "tcp"
}
TOML Format:
[[tasks]]
type = "ufw"
description = "Allow SSH access"
state = "allow"
port = "22"
proto = "tcp"
Allow HTTP and HTTPS:
YAML Format:
- type: ufw
description: "Allow web traffic"
state: allow
port: "80,443"
proto: tcp
JSON Format:
{
"type": "ufw",
"description": "Allow web traffic",
"state": "allow",
"port": "80,443",
"proto": "tcp"
}
TOML Format:
[[tasks]]
type = "ufw"
description = "Allow web traffic"
state = "allow"
port = "80,443"
proto = "tcp"
Deny specific IP address:
YAML Format:
- type: ufw
description: "Block specific IP address"
state: deny
from: 192.168.1.100
JSON Format:
{
"type": "ufw",
"description": "Block specific IP address",
"state": "deny",
"from": "192.168.1.100"
}
TOML Format:
[[tasks]]
type = "ufw"
description = "Block specific IP address"
state = "deny"
from = "192.168.1.100"
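The documented default, direction, and logging fields cover baseline policy. A hedged sketch; the exact accepted values for these fields are assumptions based on the field list above and common ufw usage:

```yaml
# Sketch: default-deny incoming traffic and enable logging.
# The value names (deny, in, "on") are assumptions.
- type: ufw
  description: "Default deny incoming traffic"
  state: enabled
  default: deny
  direction: in
- type: ufw
  description: "Enable UFW logging"
  state: enabled
  logging: "on"
```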
Source Control
git
Description: Git repository management task
Required Fields:
- accept_hostkey(bool): Accept host key. If true, ensures that -o StrictHostKeyChecking=no is present as an ssh option.
- clone(bool): Whether to clone if the repository doesn't exist. If false, do not clone the repository even if it does not exist locally.
- dest(String): Destination directory. The path where the repository should be checked out.
- force(bool): Whether to force checkout. If true, any modified files in the working repository will be discarded.
- recursive(bool): Whether to clone recursively (include submodules). If false, the repository will be cloned without the --recursive option.
- remote(String): Remote name. Name of the remote.
- repo(String): Git repository URL. The git, SSH, or HTTP(S) protocol address of the git repository.
- update(bool): Whether to update the repository. If false, do not retrieve new revisions from the origin repository.
- version(String): Version to check out. This can be the literal string HEAD, a branch name, a tag name, or a SHA-1 hash.
Optional Fields:
- depth(Option): Depth for shallow clone. Create a shallow clone with a history truncated to the specified number of revisions.
- description(Option): Optional description of what this task does. Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- key_file(Option): SSH key file. Specify an optional private key file path to use for the checkout.
- register(Option): Optional variable name to register the task result in
- ssh_opts(Option): SSH options. Options git will pass to ssh when ssh is used as the protocol.
- when(Option): Optional condition to determine if the task should run
Registered Outputs:
- after(String): The SHA-1 hash after the task has run
- before(String): The SHA-1 hash before the task has run
- changed(bool): Whether the repository was updated or cloned
Examples:
Clone a repository:
YAML Format:
- type: git
description: "Clone application repository"
repo: https://github.com/user/myapp.git
dest: /opt/myapp
version: main
JSON Format:
{
"type": "git",
"description": "Clone application repository",
"repo": "https://github.com/user/myapp.git",
"dest": "/opt/myapp",
"version": "main"
}
TOML Format:
[[tasks]]
type = "git"
description = "Clone application repository"
repo = "https://github.com/user/myapp.git"
dest = "/opt/myapp"
version = "main"
Clone specific branch:
YAML Format:
- type: git
description: "Clone development branch"
repo: https://github.com/user/myapp.git
dest: /opt/myapp-dev
version: develop
JSON Format:
{
"type": "git",
"description": "Clone development branch",
"repo": "https://github.com/user/myapp.git",
"dest": "/opt/myapp-dev",
"version": "develop"
}
TOML Format:
[[tasks]]
type = "git"
description = "Clone development branch"
repo = "https://github.com/user/myapp.git"
dest = "/opt/myapp-dev"
version = "develop"
Clone with submodules:
YAML Format:
- type: git
description: "Clone repository with submodules"
repo: https://github.com/user/myapp.git
dest: /opt/myapp
recursive: true
JSON Format:
{
"type": "git",
"description": "Clone repository with submodules",
"repo": "https://github.com/user/myapp.git",
"dest": "/opt/myapp",
"recursive": true
}
TOML Format:
[[tasks]]
type = "git"
description = "Clone repository with submodules"
repo = "https://github.com/user/myapp.git"
dest = "/opt/myapp"
recursive = true
Clone specific commit:
YAML Format:
- type: git
description: "Clone specific commit"
repo: https://github.com/user/myapp.git
dest: /opt/myapp
version: abc123def456
JSON Format:
{
"type": "git",
"description": "Clone specific commit",
"repo": "https://github.com/user/myapp.git",
"dest": "/opt/myapp",
"version": "abc123def456"
}
TOML Format:
[[tasks]]
type = "git"
description = "Clone specific commit"
repo = "https://github.com/user/myapp.git"
dest = "/opt/myapp"
version = "abc123def456"
Register repository state:
YAML Format:
- type: git
description: "Clone rust source"
repo: https://github.com/rust-lang/rust.git
dest: /opt/rust
register: rust_repo
- type: debug
msg: "Rust repo is at {{ rust_repo.after }}"
JSON Format:
[
{
"type": "git",
"description": "Clone rust source",
"repo": "https://github.com/rust-lang/rust.git",
"dest": "/opt/rust",
"register": "rust_repo"
},
{
"type": "debug",
"msg": "Rust repo is at {{ rust_repo.after }}"
}
]
TOML Format:
[[tasks]]
type = "git"
description = "Clone rust source"
repo = "https://github.com/rust-lang/rust.git"
dest = "/opt/rust"
register = "rust_repo"
[[tasks]]
type = "debug"
msg = "Rust repo is at {{ rust_repo.after }}"
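Shallow clone over SSH:
The depth and key_file fields, documented above but not shown in the examples, can be combined for a fast CI-style checkout over SSH. A minimal sketch using only fields from this reference (the repository URL and key path are illustrative):
YAML Format:
- type: git
description: "Shallow clone over SSH"
repo: git@github.com:user/myapp.git
dest: /opt/myapp
version: main
depth: 1
key_file: /root/.ssh/deploy_key
accept_hostkey: true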
System Administration
cron
Description: Cron job management task
Required Fields:
- day(String): Day of month (1-31, or * for any)
- hour(String): Hour (0-23, or * for any)
- job(String): Command to execute
- minute(String): Minute (0-59, or * for any)
- month(String): Month (1-12, or * for any)
- name(String): Unique name for this cron job
- state(CronState): Cron job state
- user(String): User to run the job as
- weekday(String): Day of week (0-7, or * for any)
Optional Fields:
- comment(Option): Optional comment/description
- description(Option): Optional description of what this task does. Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- when(Option): Optional condition to determine if the task should run
Examples:
Create a cron job:
YAML Format:
- type: cron
description: "Create daily backup cron job"
name: daily-backup
state: present
user: root
minute: "0"
hour: "2"
day: "*"
month: "*"
weekday: "*"
job: "/usr/local/bin/backup.sh"
JSON Format:
{
"type": "cron",
"description": "Create daily backup cron job",
"name": "daily-backup",
"state": "present",
"user": "root",
"minute": "0",
"hour": "2",
"day": "*",
"month": "*",
"weekday": "*",
"job": "/usr/local/bin/backup.sh"
}
TOML Format:
[[tasks]]
type = "cron"
description = "Create daily backup cron job"
name = "daily-backup"
state = "present"
user = "root"
minute = "0"
hour = "2"
day = "*"
month = "*"
weekday = "*"
job = "/usr/local/bin/backup.sh"
Create cron job with specific schedule:
YAML Format:
- type: cron
description: "Weekly maintenance on Mondays"
name: weekly-maintenance
state: present
user: root
minute: "0"
hour: "9"
day: "*"
month: "*"
weekday: "1"
job: "/usr/local/bin/maintenance.sh"
comment: "Weekly system maintenance"
JSON Format:
{
"type": "cron",
"description": "Weekly maintenance on Mondays",
"name": "weekly-maintenance",
"state": "present",
"user": "root",
"minute": "0",
"hour": "9",
"day": "*",
"month": "*",
"weekday": "1",
"job": "/usr/local/bin/maintenance.sh",
"comment": "Weekly system maintenance"
}
TOML Format:
[[tasks]]
type = "cron"
description = "Weekly maintenance on Mondays"
name = "weekly-maintenance"
state = "present"
user = "root"
minute = "0"
hour = "9"
day = "*"
month = "*"
weekday = "1"
job = "/usr/local/bin/maintenance.sh"
comment = "Weekly system maintenance"
Remove a cron job:
YAML Format:
- type: cron
description: "Remove daily backup cron job"
name: daily-backup
state: absent
JSON Format:
{
"type": "cron",
"description": "Remove daily backup cron job",
"name": "daily-backup",
"state": "absent"
}
TOML Format:
[[tasks]]
type = "cron"
description = "Remove daily backup cron job"
name = "daily-backup"
state = "absent"
Cron job with complex schedule:
YAML Format:
- type: cron
description: "Monitor service every 15 minutes during business hours"
name: service-monitor
state: present
user: monitor
minute: "*/15"
hour: "9-17"
day: "1-5"
month: "*"
weekday: "*"
job: "/usr/local/bin/check-service.sh"
comment: "Business hours service monitoring"
JSON Format:
{
"type": "cron",
"description": "Monitor service every 15 minutes during business hours",
"name": "service-monitor",
"state": "present",
"user": "monitor",
"minute": "*/15",
"hour": "9-17",
"day": "1-5",
"month": "*",
"weekday": "*",
"job": "/usr/local/bin/check-service.sh",
"comment": "Business hours service monitoring"
}
TOML Format:
[[tasks]]
type = "cron"
description = "Monitor service every 15 minutes during business hours"
name = "service-monitor"
state = "present"
user = "monitor"
minute = "*/15"
hour = "9-17"
day = "1-5"
month = "*"
weekday = "*"
job = "/usr/local/bin/check-service.sh"
comment = "Business hours service monitoring"
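Conditional cron job:
The when field can gate a cron task on a template condition, for example only installing the job on hosts where a variable is set. A sketch; the backup_enabled variable is an assumption, not part of this reference:
YAML Format:
- type: cron
description: "Hourly sync, only where enabled"
name: hourly-sync
state: present
user: root
minute: "0"
hour: "*"
day: "*"
month: "*"
weekday: "*"
job: "/usr/local/bin/sync.sh"
when: "backup_enabled"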
filesystem
Description: Filesystem creation/deletion task
Required Fields:
- dev(String): Device path
- force(bool): Force filesystem creation (dangerous!)
- opts(Vec): Additional mkfs options
- state(FilesystemState): Filesystem state
Optional Fields:
- description(Option): Optional description of what this task does. Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- fstype(Option): Filesystem type (ext4, xfs, btrfs, etc.)
- when(Option): Optional condition to determine if the task should run
Examples:
Create an ext4 filesystem:
YAML Format:
- type: filesystem
description: "Create ext4 filesystem"
dev: /dev/sdb1
state: present
fstype: ext4
JSON Format:
{
"type": "filesystem",
"description": "Create ext4 filesystem",
"dev": "/dev/sdb1",
"state": "present",
"fstype": "ext4"
}
TOML Format:
[[tasks]]
type = "filesystem"
description = "Create ext4 filesystem"
dev = "/dev/sdb1"
state = "present"
fstype = "ext4"
Create an XFS filesystem:
YAML Format:
- type: filesystem
description: "Create XFS filesystem"
dev: /dev/sdc1
state: present
fstype: xfs
opts: ["-f", "-i", "size=512"]
JSON Format:
{
"type": "filesystem",
"description": "Create XFS filesystem",
"dev": "/dev/sdc1",
"state": "present",
"fstype": "xfs",
"opts": ["-f", "-i", "size=512"]
}
TOML Format:
[[tasks]]
type = "filesystem"
description = "Create XFS filesystem"
dev = "/dev/sdc1"
state = "present"
fstype = "xfs"
opts = ["-f", "-i", "size=512"]
Create a Btrfs filesystem:
YAML Format:
- type: filesystem
description: "Create Btrfs filesystem"
dev: /dev/sdd1
state: present
fstype: btrfs
JSON Format:
{
"type": "filesystem",
"description": "Create Btrfs filesystem",
"dev": "/dev/sdd1",
"state": "present",
"fstype": "btrfs"
}
TOML Format:
[[tasks]]
type = "filesystem"
description = "Create Btrfs filesystem"
dev = "/dev/sdd1"
state = "present"
fstype = "btrfs"
Force create filesystem:
YAML Format:
- type: filesystem
description: "Force create ext4 filesystem"
dev: /dev/sde1
state: present
fstype: ext4
force: true
JSON Format:
{
"type": "filesystem",
"description": "Force create ext4 filesystem",
"dev": "/dev/sde1",
"state": "present",
"fstype": "ext4",
"force": true
}
TOML Format:
[[tasks]]
type = "filesystem"
description = "Force create ext4 filesystem"
dev = "/dev/sde1"
state = "present"
fstype = "ext4"
force = true
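Create and mount in one pass:
A filesystem task is commonly followed by a mount task (see the mount reference in this section) so the new filesystem is usable immediately. A sketch combining the two, using only fields shown in this reference:
YAML Format:
- type: filesystem
description: "Create ext4 filesystem on data disk"
dev: /dev/sdb1
state: present
fstype: ext4
- type: mount
description: "Mount the data disk"
path: /mnt/data
state: mounted
src: /dev/sdb1
fstype: ext4
opts: ["defaults"]
fstab: true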
group
Description: Group management task
Required Fields:
- name(String): Group name
- state(GroupState): Group state
- system(bool): Whether group is a system group
Optional Fields:
- description(Option): Optional description of what this task does. Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- gid(Option): Group ID
- when(Option): Optional condition to determine if the task should run
Examples:
Create a group:
YAML Format:
- type: group
description: "Create a web application group"
name: webapp
state: present
gid: 1001
JSON Format:
{
"type": "group",
"description": "Create a web application group",
"name": "webapp",
"state": "present",
"gid": 1001
}
TOML Format:
[[tasks]]
type = "group"
description = "Create a web application group"
name = "webapp"
state = "present"
gid = 1001
Create a system group:
YAML Format:
- type: group
description: "Create a system group for nginx"
name: nginx
state: present
system: true
JSON Format:
{
"type": "group",
"description": "Create a system group for nginx",
"name": "nginx",
"state": "present",
"system": true
}
TOML Format:
[[tasks]]
type = "group"
description = "Create a system group for nginx"
name = "nginx"
state = "present"
system = true
Remove a group:
YAML Format:
- type: group
description: "Remove the old group"
name: oldgroup
state: absent
JSON Format:
{
"type": "group",
"description": "Remove the old group",
"name": "oldgroup",
"state": "absent"
}
TOML Format:
[[tasks]]
type = "group"
description = "Remove the old group"
name = "oldgroup"
state = "absent"
hostname
Description: System hostname management task
Required Fields:
- name(String): Desired hostname
- persist(bool): Whether to persist hostname to /etc/hostname
Optional Fields:
- description(Option): Optional description of what this task does. Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- when(Option): Optional condition to determine if the task should run
Examples:
Set system hostname:
YAML Format:
- type: hostname
description: "Set system hostname"
name: web-server-01
persist: true
JSON Format:
{
"type": "hostname",
"description": "Set system hostname",
"name": "web-server-01",
"persist": true
}
TOML Format:
[[tasks]]
type = "hostname"
description = "Set system hostname"
name = "web-server-01"
persist = true
Set hostname temporarily:
YAML Format:
- type: hostname
description: "Set temporary hostname"
name: temp-server
persist: false
JSON Format:
{
"type": "hostname",
"description": "Set temporary hostname",
"name": "temp-server",
"persist": false
}
TOML Format:
[[tasks]]
type = "hostname"
description = "Set temporary hostname"
name = "temp-server"
persist = false
Set hostname with domain:
YAML Format:
- type: hostname
description: "Set fully qualified hostname"
name: app.example.com
persist: true
JSON Format:
{
"type": "hostname",
"description": "Set fully qualified hostname",
"name": "app.example.com",
"persist": true
}
TOML Format:
[[tasks]]
type = "hostname"
description = "Set fully qualified hostname"
name = "app.example.com"
persist = true
mount
Description: Filesystem mounting task
Required Fields:
- fstab(bool): Whether to update /etc/fstab
- opts(Vec): Mount options
- path(String): Mount point path
- recursive(bool): Whether to mount recursively
- src(String): Device to mount (device path, UUID, LABEL, etc.)
- state(MountState): Mount state
Optional Fields:
- description(Option): Optional description of what this task does. Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- fstype(Option): Filesystem type
- when(Option): Optional condition to determine if the task should run
Examples:
Mount a filesystem:
YAML Format:
- type: mount
description: "Mount data partition"
path: /mnt/data
state: mounted
src: /dev/sdb1
fstype: ext4
opts: ["defaults"]
JSON Format:
{
"type": "mount",
"description": "Mount data partition",
"path": "/mnt/data",
"state": "mounted",
"src": "/dev/sdb1",
"fstype": "ext4",
"opts": ["defaults"]
}
TOML Format:
[[tasks]]
type = "mount"
description = "Mount data partition"
path = "/mnt/data"
state = "mounted"
src = "/dev/sdb1"
fstype = "ext4"
opts = ["defaults"]
Mount with fstab entry:
YAML Format:
- type: mount
description: "Mount NFS share with fstab entry"
path: /mnt/nfs
state: present
src: 192.168.1.100:/export/data
fstype: nfs
opts: ["defaults", "vers=4"]
fstab: true
JSON Format:
{
"type": "mount",
"description": "Mount NFS share with fstab entry",
"path": "/mnt/nfs",
"state": "present",
"src": "192.168.1.100:/export/data",
"fstype": "nfs",
"opts": ["defaults", "vers=4"],
"fstab": true
}
TOML Format:
[[tasks]]
type = "mount"
description = "Mount NFS share with fstab entry"
path = "/mnt/nfs"
state = "present"
src = "192.168.1.100:/export/data"
fstype = "nfs"
opts = ["defaults", "vers=4"]
fstab = true
Unmount a filesystem:
YAML Format:
- type: mount
description: "Unmount temporary mount"
path: /mnt/temp
state: unmounted
JSON Format:
{
"type": "mount",
"description": "Unmount temporary mount",
"path": "/mnt/temp",
"state": "unmounted"
}
TOML Format:
[[tasks]]
type = "mount"
description = "Unmount temporary mount"
path = "/mnt/temp"
state = "unmounted"
Remove fstab entry:
YAML Format:
- type: mount
description: "Remove fstab entry"
path: /mnt/old
state: absent
fstab: true
JSON Format:
{
"type": "mount",
"description": "Remove fstab entry",
"path": "/mnt/old",
"state": "absent",
"fstab": true
}
TOML Format:
[[tasks]]
type = "mount"
description = "Remove fstab entry"
path = "/mnt/old"
state = "absent"
fstab = true
reboot
Description: System reboot task
Required Fields:
- delay(u32): Delay before reboot (seconds)
- force(bool): Whether to force reboot (don't wait for clean shutdown)
- test(bool): Test mode (don't actually reboot)
Optional Fields:
- description(Option): Optional description of what this task does. Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- msg(Option): Message to display before reboot
- when(Option): Optional condition to determine if the task should run
Examples:
Reboot system with delay:
YAML Format:
- type: reboot
description: "Reboot system after kernel update"
delay: 60
msg: "System will reboot in 60 seconds for kernel update"
force: false
JSON Format:
{
"type": "reboot",
"description": "Reboot system after kernel update",
"delay": 60,
"msg": "System will reboot in 60 seconds for kernel update",
"force": false
}
TOML Format:
[[tasks]]
type = "reboot"
description = "Reboot system after kernel update"
delay = 60
msg = "System will reboot in 60 seconds for kernel update"
force = false
Immediate reboot:
YAML Format:
- type: reboot
description: "Immediate system reboot"
delay: 0
force: true
JSON Format:
{
"type": "reboot",
"description": "Immediate system reboot",
"delay": 0,
"force": true
}
TOML Format:
[[tasks]]
type = "reboot"
description = "Immediate system reboot"
delay = 0
force = true
Test reboot (dry run):
YAML Format:
- type: reboot
description: "Test reboot configuration"
delay: 30
msg: "This is a test reboot - system will not actually reboot"
force: false
test: true
JSON Format:
{
"type": "reboot",
"description": "Test reboot configuration",
"delay": 30,
"msg": "This is a test reboot - system will not actually reboot",
"force": false,
"test": true
}
TOML Format:
[[tasks]]
type = "reboot"
description = "Test reboot configuration"
delay = 30
msg = "This is a test reboot - system will not actually reboot"
force = false
test = true
service
Description: Service management task
Required Fields:
- name(String): Service name
- state(ServiceState): Service state
Optional Fields:
- description(Option): Optional description of what this task does. Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- enabled(Option): Whether to enable service at boot
- manager(Option): Service manager to use (auto-detect if not specified)
- when(Option): Optional condition to determine if the task should run
Examples:
Start and enable a service:
YAML Format:
- type: service
description: "Start and enable nginx service"
name: nginx
state: started
enabled: true
JSON Format:
{
"type": "service",
"description": "Start and enable nginx service",
"name": "nginx",
"state": "started",
"enabled": true
}
TOML Format:
[[tasks]]
type = "service"
description = "Start and enable nginx service"
name = "nginx"
state = "started"
enabled = true
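Restart a service with an explicit manager:
The manager field can pin a specific service manager instead of relying on auto-detection. A hedged sketch; the "systemd" value and the "restarted" state are inferred from the field descriptions above, not confirmed by the examples:
YAML Format:
- type: service
description: "Restart nginx under systemd"
name: nginx
state: restarted
manager: systemd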
shutdown
Description: System shutdown task
Required Fields:
- delay(u32): Delay before shutdown (seconds)
- force(bool): Whether to force shutdown (don't wait for clean shutdown)
- test(bool): Test mode (don't actually shut down)
Optional Fields:
- description(Option): Optional description of what this task does. Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- msg(Option): Message to display before shutdown
- when(Option): Optional condition to determine if the task should run
Examples:
Shutdown system with delay:
YAML Format:
- type: shutdown
description: "Shutdown system for maintenance"
delay: 30
msg: "System will shutdown in 30 seconds for maintenance"
force: false
JSON Format:
{
"type": "shutdown",
"description": "Shutdown system for maintenance",
"delay": 30,
"msg": "System will shutdown in 30 seconds for maintenance",
"force": false
}
TOML Format:
[[tasks]]
type = "shutdown"
description = "Shutdown system for maintenance"
delay = 30
msg = "System will shutdown in 30 seconds for maintenance"
force = false
Immediate shutdown:
YAML Format:
- type: shutdown
description: "Immediate system shutdown"
delay: 0
force: true
JSON Format:
{
"type": "shutdown",
"description": "Immediate system shutdown",
"delay": 0,
"force": true
}
TOML Format:
[[tasks]]
type = "shutdown"
description = "Immediate system shutdown"
delay = 0
force = true
Test shutdown (dry run):
YAML Format:
- type: shutdown
description: "Test shutdown configuration"
delay: 60
msg: "This is a test shutdown - system will not actually shutdown"
force: false
test: true
JSON Format:
{
"type": "shutdown",
"description": "Test shutdown configuration",
"delay": 60,
"msg": "This is a test shutdown - system will not actually shutdown",
"force": false,
"test": true
}
TOML Format:
[[tasks]]
type = "shutdown"
description = "Test shutdown configuration"
delay = 60
msg = "This is a test shutdown - system will not actually shutdown"
force = false
test = true
sysctl
Description: Kernel parameter management task
Required Fields:
- name(String): Parameter name (e.g., "net.ipv4.ip_forward")
- persist(bool): Whether to persist changes to /etc/sysctl.conf
- reload(bool): Whether to reload immediately
- state(SysctlState): Parameter state
- value(String): Parameter value
Optional Fields:
- description(Option): Optional description of what this task does. Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- when(Option): Optional condition to determine if the task should run
Examples:
Set kernel parameter:
YAML Format:
- type: sysctl
description: "Enable IP forwarding"
name: net.ipv4.ip_forward
state: present
value: "1"
persist: true
JSON Format:
{
"type": "sysctl",
"description": "Enable IP forwarding",
"name": "net.ipv4.ip_forward",
"state": "present",
"value": "1",
"persist": true
}
TOML Format:
[[tasks]]
type = "sysctl"
description = "Enable IP forwarding"
name = "net.ipv4.ip_forward"
state = "present"
value = "1"
persist = true
Configure network buffer sizes:
YAML Format:
- type: sysctl
description: "Increase network buffer sizes"
name: net.core.rmem_max
state: present
value: "16777216"
persist: true
JSON Format:
{
"type": "sysctl",
"description": "Increase network buffer sizes",
"name": "net.core.rmem_max",
"state": "present",
"value": "16777216",
"persist": true
}
TOML Format:
[[tasks]]
type = "sysctl"
description = "Increase network buffer sizes"
name = "net.core.rmem_max"
state = "present"
value = "16777216"
persist = true
Disable IPv6:
YAML Format:
- type: sysctl
description: "Disable IPv6"
name: net.ipv6.conf.all.disable_ipv6
state: present
value: "1"
persist: true
JSON Format:
{
"type": "sysctl",
"description": "Disable IPv6",
"name": "net.ipv6.conf.all.disable_ipv6",
"state": "present",
"value": "1",
"persist": true
}
TOML Format:
[[tasks]]
type = "sysctl"
description = "Disable IPv6"
name = "net.ipv6.conf.all.disable_ipv6"
state = "present"
value = "1"
persist = true
Remove sysctl parameter:
YAML Format:
- type: sysctl
description: "Remove custom sysctl parameter"
name: net.ipv4.tcp_tw_reuse
state: absent
JSON Format:
{
"type": "sysctl",
"description": "Remove custom sysctl parameter",
"name": "net.ipv4.tcp_tw_reuse",
"state": "absent"
}
TOML Format:
[[tasks]]
type = "sysctl"
description = "Remove custom sysctl parameter"
name = "net.ipv4.tcp_tw_reuse"
state = "absent"
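Set a parameter with explicit reload:
reload and persist are both required booleans: reload applies the value to the running kernel immediately, while persist writes it to /etc/sysctl.conf. A sketch showing both set explicitly (the parameter and value are illustrative):
YAML Format:
- type: sysctl
description: "Raise file descriptor limit"
name: fs.file-max
state: present
value: "2097152"
persist: true
reload: true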
timezone
Description: System timezone management task
Required Fields:
- name(String): Timezone name (e.g., "America/New_York", "UTC")
Optional Fields:
- description(Option): Optional description of what this task does. Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- when(Option): Optional condition to determine if the task should run
Examples:
Set system timezone to UTC:
YAML Format:
- type: timezone
description: "Set system timezone to UTC"
name: UTC
JSON Format:
{
"type": "timezone",
"description": "Set system timezone to UTC",
"name": "UTC"
}
TOML Format:
[[tasks]]
type = "timezone"
description = "Set system timezone to UTC"
name = "UTC"
Set timezone to Eastern Time:
YAML Format:
- type: timezone
description: "Set timezone to Eastern Time"
name: America/New_York
JSON Format:
{
"type": "timezone",
"description": "Set timezone to Eastern Time",
"name": "America/New_York"
}
TOML Format:
[[tasks]]
type = "timezone"
description = "Set timezone to Eastern Time"
name = "America/New_York"
Set timezone to Pacific Time:
YAML Format:
- type: timezone
description: "Set timezone to Pacific Time"
name: America/Los_Angeles
JSON Format:
{
"type": "timezone",
"description": "Set timezone to Pacific Time",
"name": "America/Los_Angeles"
}
TOML Format:
[[tasks]]
type = "timezone"
description = "Set timezone to Pacific Time"
name = "America/Los_Angeles"
user
Description: User and group management task
Required Fields:
- groups(Vec): Additional groups
- name(String): Username
- state(UserState): User state
Optional Fields:
- create_home(Option): Whether to create home directory
- description(Option): Optional description of what this task does. Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- gid(Option): Group ID
- home(Option): Home directory
- password(Option): Password (hashed)
- shell(Option): Shell
- uid(Option): User ID
- when(Option): Optional condition to determine if the task should run
Examples:
Create a user with basic settings:
YAML Format:
- type: user
description: "Create a web application user"
name: webapp
state: present
uid: 1001
gid: 1001
home: /home/webapp
shell: /bin/bash
create_home: true
JSON Format:
{
"type": "user",
"description": "Create a web application user",
"name": "webapp",
"state": "present",
"uid": 1001,
"gid": 1001,
"home": "/home/webapp",
"shell": "/bin/bash",
"create_home": true
}
TOML Format:
[[tasks]]
type = "user"
description = "Create a web application user"
name = "webapp"
state = "present"
uid = 1001
gid = 1001
home = "/home/webapp"
shell = "/bin/bash"
create_home = true
Create a system user:
YAML Format:
- type: user
description: "Create a system user for nginx"
name: nginx
state: present
uid: 33
gid: 33
home: /var/lib/nginx
shell: /usr/sbin/nologin
create_home: false
JSON Format:
{
"type": "user",
"description": "Create a system user for nginx",
"name": "nginx",
"state": "present",
"uid": 33,
"gid": 33,
"home": "/var/lib/nginx",
"shell": "/usr/sbin/nologin",
"create_home": false
}
TOML Format:
[[tasks]]
type = "user"
description = "Create a system user for nginx"
name = "nginx"
state = "present"
uid = 33
gid = 33
home = "/var/lib/nginx"
shell = "/usr/sbin/nologin"
create_home = false
Remove a user:
YAML Format:
- type: user
description: "Remove the old user account"
name: olduser
state: absent
JSON Format:
{
"type": "user",
"description": "Remove the old user account",
"name": "olduser",
"state": "absent"
}
TOML Format:
[[tasks]]
type = "user"
description = "Remove the old user account"
name = "olduser"
state = "absent"
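Create a user with supplementary groups:
The groups field (required in the schema but absent from the examples above) assigns supplementary groups, and password takes an already-hashed string. A sketch; the group names are illustrative and the hash shown is a placeholder, not a real credential:
YAML Format:
- type: user
description: "Create a deploy user with extra groups"
name: deploy
state: present
shell: /bin/bash
create_home: true
groups: ["docker", "sudo"]
password: "$6$placeholderhash"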
Utility/Control
assert
Description: Assert task for validating conditions
Required Fields:
- quiet(bool): Quiet mode. Don't show success messages.
- that(String): Condition to assert. Boolean expression that must evaluate to true.
Optional Fields:
- description(Option): Optional description of what this task does. Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- fail_msg(Option): Failure message. Message to display when the assertion fails.
- success_msg(Option): Success message. Message to display when the assertion passes.
- when(Option): Optional condition to determine if the task should run
Examples:
Assert a condition:
YAML Format:
- type: assert
description: "Verify nginx is installed"
that: "'nginx' in installed_packages"
success_msg: "Nginx is properly installed"
fail_msg: "Nginx installation failed"
JSON Format:
[
{
"type": "assert",
"description": "Verify nginx is installed",
"that": "'nginx' in installed_packages",
"success_msg": "Nginx is properly installed",
"fail_msg": "Nginx installation failed"
}
]
TOML Format:
[[tasks]]
type = "assert"
description = "Verify nginx is installed"
that = "'nginx' in installed_packages"
success_msg = "Nginx is properly installed"
fail_msg = "Nginx installation failed"
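Quiet assertion:
The required quiet flag suppresses the success message, which keeps output terse in large task lists. A sketch; the disk_free_mb variable is an assumed fact name, not part of this reference:
YAML Format:
- type: assert
description: "Check disk space quietly"
that: "disk_free_mb > 1024"
quiet: true
fail_msg: "Less than 1 GiB of disk space free"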
debug
Description: Debug task for displaying information
Required Fields:
- msg(String): Message to display. The message to print. Can be a string or variable reference.
- verbosity(DebugVerbosity): Verbosity level. Controls when this debug message is shown (normal/verbose).
Optional Fields:
- description(Option): Optional description of what this task does. Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- var(Option): Variable to debug. Variable name to display the value of. Alternative to msg.
- when(Option): Optional condition to determine if the task should run
Examples:
Display a debug message:
YAML Format:
- type: debug
description: "Show current configuration"
msg: "Current web_root: {{ web_root }}"
verbosity: normal
JSON Format:
[
{
"type": "debug",
"description": "Show current configuration",
"msg": "Current web_root: {{ web_root }}",
"verbosity": "normal"
}
]
TOML Format:
[[tasks]]
type = "debug"
description = "Show current configuration"
msg = "Current web_root: {{ web_root }}"
verbosity = "normal"
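Dump a variable at verbose level:
The var field, an alternative to msg, prints the value of a single variable; with verbose verbosity the task only produces output in verbose runs. A sketch; rust_repo is assumed to have been registered by an earlier git task:
YAML Format:
- type: debug
description: "Dump registered git result"
var: rust_repo
verbosity: verbose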
fail
Description: Fail task for forcing execution failure
Required Fields:
- msg(String): Failure message. Message to display when failing.
Optional Fields:
- description(Option): Optional description of what this task does. Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- when(Option): Optional condition to determine if the task should run
Examples:
Fail with a message:
YAML Format:
- type: fail
description: "Stop execution if requirements not met"
msg: "System requirements not satisfied"
when: "not requirements_met"
JSON Format:
[
{
"type": "fail",
"description": "Stop execution if requirements not met",
"msg": "System requirements not satisfied",
"when": "not requirements_met"
}
]
TOML Format:
[[tasks]]
type = "fail"
description = "Stop execution if requirements not met"
msg = "System requirements not satisfied"
when = "not requirements_met"
include_role
Description: Include role task for reusable configurations
Required Fields:
- defaults(HashMap&lt;String, Value&gt;): Default variables. Default variables for the role.
- name(String): Role name. Name of the role to include.
- vars(HashMap&lt;String, Value&gt;): Variable overrides. Variables to pass to the role.
Optional Fields:
- description(Option): Optional description of what this task does. Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- when(Option): Optional condition to determine if the task should run
Examples:
Include a role:
YAML Format:
- type: include_role
description: "Setup web server"
name: webserver
when: "webserver_required"
vars:
port: 8080
defaults:
document_root: /var/www/html
JSON Format:
[
{
"type": "include_role",
"description": "Setup web server",
"name": "webserver",
"when": "webserver_required",
"vars": {
"port": 8080
},
"defaults": {
"document_root": "/var/www/html"
}
}
]
TOML Format:
[[tasks]]
type = "include_role"
description = "Setup web server"
name = "webserver"
when = "webserver_required"
[tasks.vars]
port = 8080
[tasks.defaults]
document_root = "/var/www/html"
include_tasks
Description: Include tasks task for modular configurations
Required Fields:
- file(String): File to include. Path to the task file to include.
- vars(HashMap&lt;String, Value&gt;): Variable overrides. Variables to pass to the included tasks.
Optional Fields:
- description(Option): Optional description of what this task does. Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- when(Option): Optional condition to determine if the task should run
Examples:
Include a task file:
YAML Format:
- type: include_tasks
description: "Include common setup tasks"
file: common/setup.yml
when: "setup_required"
vars:
app_name: myapp
JSON Format:
[
{
"type": "include_tasks",
"description": "Include common setup tasks",
"file": "common/setup.yml",
"when": "setup_required",
"vars": {
"app_name": "myapp"
}
}
]
TOML Format:
[[tasks]]
type = "include_tasks"
description = "Include common setup tasks"
file = "common/setup.yml"
when = "setup_required"
[tasks.vars]
app_name = "myapp"
pause
Description: Pause execution for a specified duration
Required Fields:
- minutes(u64): Duration to pause execution, in minutes.
- prompt(String): Message shown to the user during the pause.
- seconds(u64): Duration to pause execution, in seconds.
Optional Fields:
- description(Option): Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- when(Option): Condition that determines whether the task should run.
Examples:
Pause execution:
YAML Format:
- type: pause
description: "Wait for services to start"
prompt: "Waiting for services to initialize..."
seconds: 30
JSON Format:
[
{
"type": "pause",
"description": "Wait for services to start",
"prompt": "Waiting for services to initialize...",
"seconds": 30
}
]
TOML Format:
[[tasks]]
type = "pause"
description = "Wait for services to start"
prompt = "Waiting for services to initialize..."
seconds = 30
set_fact
Description: Set a fact (variable) for later use
Required Fields:
- cacheable(bool): Whether this fact can be cached between runs.
- key(String): Name of the fact/variable to set.
- value(Value): Value to assign to the variable.
Optional Fields:
- description(Option): Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- when(Option): Condition that determines whether the task should run.
Examples:
Set a fact:
YAML Format:
- type: set_fact
description: "Set application version"
key: app_version
value: "1.2.3"
cacheable: true
JSON Format:
[
{
"type": "set_fact",
"description": "Set application version",
"key": "app_version",
"value": "1.2.3",
"cacheable": true
}
]
TOML Format:
[[tasks]]
type = "set_fact"
description = "Set application version"
key = "app_version"
value = "1.2.3"
cacheable = true
wait_for
Description: Wait for a condition (port, file, or connection state) before continuing
Required Fields:
- active_connection(bool): Perform an active connection attempt instead of just a port scan.
- delay(u64): Time to wait between connectivity checks.
- state(ConnectionState): Whether to wait for the connection to be started or stopped.
- timeout(u64): Maximum time to wait for the condition, in seconds.
Optional Fields:
- description(Option): Human-readable description of the task's purpose. Used for documentation and can be displayed in logs or reports.
- host(Option): Hostname or IP address to check for connectivity.
- path(Option): File path to wait for existence or non-existence.
- port(Option): Port number to check for connectivity.
- when(Option): Condition that determines whether the task should run.
Examples:
Wait for port connectivity:
YAML Format:
- type: wait_for
description: "Wait for web server to start"
host: localhost
port: 80
timeout: 60
delay: 5
JSON Format:
[
{
"type": "wait_for",
"description": "Wait for web server to start",
"host": "localhost",
"port": 80,
"timeout": 60,
"delay": 5
}
]
TOML Format:
[[tasks]]
type = "wait_for"
description = "Wait for web server to start"
host = "localhost"
port = 80
timeout = 60
delay = 5
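The polling behavior described above (re-check every `delay` seconds until `timeout` elapses) can be sketched in Python; `wait_for_port` is an illustrative helper, not part of Driftless, which implements this in Rust:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float, delay: float) -> bool:
    """Poll a TCP port until it accepts connections or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=delay):
                return True  # connection succeeded: the service is up
        except OSError:
            time.sleep(delay)  # not reachable yet; wait before the next check
    return False
```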
Facts Collectors (facts)
Facts collectors gather system metrics and inventory information. Each collector corresponds to a specific type of system information or metric.
Collector Configuration
All facts collectors support common configuration fields for controlling collection behavior:
- name: Collector name (used for metric names)
- enabled: Whether this collector is enabled (default: true)
- poll_interval: Poll interval in seconds (how often to collect this metric)
- labels: Additional labels for this collector
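These common fields drive a simple scheduling loop: disabled collectors are skipped, and each enabled collector runs once its `poll_interval` has elapsed since its last run. A minimal Python sketch of that logic (the `Collector` shape here is illustrative, not Driftless's internal API):

```python
from dataclasses import dataclass, field

@dataclass
class Collector:
    name: str
    poll_interval: int            # seconds between collections
    enabled: bool = True          # disabled collectors are skipped entirely
    labels: dict = field(default_factory=dict)
    last_run: float = 0.0         # timestamp of the previous collection

def due_collectors(collectors, now: float):
    """Return the collectors that should run at time `now`."""
    return [
        c for c in collectors
        if c.enabled and now - c.last_run >= c.poll_interval
    ]
```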
CPU Metrics
cpu
Description: Collect CPU usage, frequency, temperature, and load average metrics
Required Fields:
- base(BaseCollector): No description available
- collect(CpuCollectOptions): CPU metrics to collect
- name(String): Collector name (used for metric names)
- poll_interval(u64): Poll interval in seconds (how often to collect this metric)
- thresholds(CpuThresholds): Thresholds for alerts
Optional Fields:
- enabled(bool): Whether this collector is enabled (default: true)
- labels(HashMap<String, String>): Additional labels for this collector
Examples:
Basic CPU metrics collection:
YAML Format:
type: cpu
name: cpu
poll_interval: 30
collect:
usage: true
per_core: true
frequency: true
temperature: true
load_average: true
thresholds:
usage_warning: 80.0
usage_critical: 95.0
temp_warning: 70.0
temp_critical: 85.0
JSON Format:
{
"type": "cpu",
"name": "cpu",
"poll_interval": 30,
"collect": {
"usage": true,
"per_core": true,
"frequency": true,
"temperature": true,
"load_average": true
},
"thresholds": {
"usage_warning": 80.0,
"usage_critical": 95.0,
"temp_warning": 70.0,
"temp_critical": 85.0
}
}
TOML Format:
[[collectors]]
type = "cpu"
name = "cpu"
poll_interval = 30
[collectors.collect]
usage = true
per_core = true
frequency = true
temperature = true
load_average = true
[collectors.thresholds]
usage_warning = 80.0
usage_critical = 95.0
temp_warning = 70.0
temp_critical = 85.0
Output:
cpu_count: 4
usage_percent: 45.2
usage_warning: false
usage_critical: false
cores:
- core_id: 0
usage_percent: 42.1
frequency_mhz: 2400
- core_id: 1
usage_percent: 48.3
frequency_mhz: 2400
frequency_mhz: 2400.0
temperature_celsius: null
temperature_available: false
temp_warning: false
temp_critical: false
load_average:
"1m": 1.25
"5m": 1.15
"15m": 1.08
Command Output
command
Description: Execute custom commands and collect their output as facts
Required Fields:
- base(BaseCollector): No description available
- command(String): Command to execute
- env(HashMap<String, String>): Environment variables
- format(CommandOutputFormat): Expected output format
- name(String): Collector name (used for metric names)
- poll_interval(u64): Poll interval in seconds (how often to collect this metric)
Optional Fields:
- cwd(Option): Working directory for the command
- enabled(bool): Whether this collector is enabled (default: true)
- labels(HashMap<String, String>): Additional labels for this collector
Examples:
Basic command output collection:
YAML Format:
type: command
name: uptime
command: uptime -p
format: text
labels:
category: system
JSON Format:
{
"type": "command",
"name": "uptime",
"command": "uptime -p",
"format": "text",
"labels": {
"category": "system"
}
}
TOML Format:
[[collectors]]
type = "command"
name = "uptime"
command = "uptime -p"
format = "text"
[collectors.labels]
category = "system"
JSON command output parsing:
YAML Format:
type: command
name: docker_stats
command: docker stats --no-stream --format json
format: json
cwd: /tmp
env:
DOCKER_HOST: unix:///var/run/docker.sock
JSON Format:
{
"type": "command",
"name": "docker_stats",
"command": "docker stats --no-stream --format json",
"format": "json",
"cwd": "/tmp",
"env": {
"DOCKER_HOST": "unix:///var/run/docker.sock"
}
}
TOML Format:
[[collectors]]
type = "command"
name = "docker_stats"
command = "docker stats --no-stream --format json"
format = "json"
cwd = "/tmp"
[collectors.env]
DOCKER_HOST = "unix:///var/run/docker.sock"
Output:
command: "docker stats --no-stream --format json"
exit_code: 0
output:
- container: "web_server"
cpu_percent: "5.2"
memory_usage: "128MiB / 1GiB"
net_io: "1.2kB / 3.4kB"
- container: "database"
cpu_percent: "2.1"
memory_usage: "256MiB / 2GiB"
net_io: "500B / 1.2kB"
labels:
category: monitoring
Key-value command output parsing:
YAML Format:
type: command
name: system_info
command: echo "hostname=$(hostname)\nos_version=$(cat /etc/os-release | grep PRETTY_NAME | cut -d'=' -f2 | tr -d '\"')\nuptime=$(uptime -p)"
format: key_value
labels:
category: system
JSON Format:
{
"type": "command",
"name": "system_info",
"command": "echo \"hostname=$(hostname)\\nos_version=$(cat /etc/os-release | grep PRETTY_NAME | cut -d'=' -f2 | tr -d '\\\"')\\nuptime=$(uptime -p)\"",
"format": "key_value",
"labels": {
"category": "system"
}
}
TOML Format:
[[collectors]]
type = "command"
name = "system_info"
command = "echo \"hostname=$(hostname)\\nos_version=$(cat /etc/os-release | grep PRETTY_NAME | cut -d'=' -f2 | tr -d '\\\"')\\nuptime=$(uptime -p)\""
format = "key_value"
[collectors.labels]
category = "system"
Output:
command: "echo \"hostname=$(hostname)\\nos_version=$(cat /etc/os-release | grep PRETTY_NAME | cut -d'=' -f2 | tr -d '\\\"')\\nuptime=$(uptime -p)\""
exit_code: 0
output:
hostname: "web-server-01"
os_version: "Ubuntu 22.04.3 LTS"
uptime: "up 2 weeks, 3 days, 4 hours"
labels:
category: system
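The `key_value` format turns each stdout line of the form `key=value` into a fact, splitting on the first `=` sign, which is how the output above maps lines like `hostname=web-server-01` to keys. A minimal parser sketch in Python (illustrative, not Driftless's implementation):

```python
def parse_key_value(stdout: str) -> dict:
    """Split each line on the first '=' into a key/value pair; skip other lines."""
    facts = {}
    for line in stdout.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            facts[key.strip()] = value.strip()
    return facts
```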
Text command output (default):
YAML Format:
type: command
name: disk_usage
command: df -h /
format: text
labels:
category: storage
JSON Format:
{
"type": "command",
"name": "disk_usage",
"command": "df -h /",
"format": "text",
"labels": {
"category": "storage"
}
}
TOML Format:
[[collectors]]
type = "command"
name = "disk_usage"
command = "df -h /"
format = "text"
[collectors.labels]
category = "storage"
Output:
command: "df -h /"
exit_code: 0
stdout: |
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 50G 15G 33G 31% /
labels:
category: storage
Command with environment variables and working directory:
YAML Format:
type: command
name: custom_script
command: ./check_service.sh
format: json
cwd: /opt/myapp
env:
SERVICE_NAME: myapp
LOG_LEVEL: info
labels:
category: application
JSON Format:
{
"type": "command",
"name": "custom_script",
"command": "./check_service.sh",
"format": "json",
"cwd": "/opt/myapp",
"env": {
"SERVICE_NAME": "myapp",
"LOG_LEVEL": "info"
},
"labels": {
"category": "application"
}
}
TOML Format:
[[collectors]]
type = "command"
name = "custom_script"
command = "./check_service.sh"
format = "json"
cwd = "/opt/myapp"
[collectors.env]
SERVICE_NAME = "myapp"
LOG_LEVEL = "info"
[collectors.labels]
category = "application"
Output:
command: "./check_service.sh"
exit_code: 0
output:
service_status: "running"
uptime_seconds: 3600
version: "1.2.3"
health_checks:
- name: "database"
status: "ok"
- name: "cache"
status: "ok"
labels:
category: application
Disk Metrics
disk
Description: Collect disk space and I/O statistics for mounted filesystems
Required Fields:
- base(BaseCollector): No description available
- collect(DiskCollectOptions): Disk metrics to collect
- devices(Vec): Disk devices to monitor (empty = all)
- mount_points(Vec): Mount points to monitor (empty = all)
- name(String): Collector name (used for metric names)
- poll_interval(u64): Poll interval in seconds (how often to collect this metric)
- thresholds(DiskThresholds): Thresholds for alerts
Optional Fields:
- enabled(bool): Whether this collector is enabled (default: true)
- labels(HashMap<String, String>): Additional labels for this collector
Examples:
Basic disk metrics collection:
YAML Format:
type: disk
name: disk
devices: ["/dev/sda", "/dev/sdb"]
mount_points: ["/", "/home", "/var"]
collect:
total: true
used: true
free: true
available: true
percentage: true
io: true
thresholds:
usage_warning: 80.0
usage_critical: 90.0
JSON Format:
{
"type": "disk",
"name": "disk",
"devices": ["/dev/sda", "/dev/sdb"],
"mount_points": ["/", "/home", "/var"],
"collect": {
"total": true,
"used": true,
"free": true,
"available": true,
"percentage": true,
"io": true
},
"thresholds": {
"usage_warning": 80.0,
"usage_critical": 90.0
}
}
TOML Format:
[[collectors]]
type = "disk"
name = "disk"
devices = ["/dev/sda", "/dev/sdb"]
mount_points = ["/", "/home", "/var"]
[collectors.collect]
total = true
used = true
free = true
available = true
percentage = true
io = true
[collectors.thresholds]
usage_warning = 80.0
usage_critical = 90.0
Output:
disks:
- device: "/dev/sda1"
mount_point: "/"
is_removable: false
total_bytes: 536870912000
total_mb: 512000
total_gb: 500
used_bytes: 268435456000
used_mb: 256000
used_gb: 250
free_bytes: 134217728000
free_mb: 128000
free_gb: 125
available_bytes: 107374182400
available_mb: 102400
available_gb: 100
usage_percent: 50
available_percent: 20
disk_pressure: "medium"
usage_warning: false
usage_critical: false
io_supported: false
labels:
storage_type: ssd
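The derived percentage fields in the sample output follow directly from the byte counts: `usage_percent` is used/total and `available_percent` is available/total, both rounded down to a whole percent. Checking that arithmetic against the sample values:

```python
def disk_percents(total: int, used: int, available: int) -> dict:
    """Derive whole-number usage/availability percentages from byte counts."""
    return {
        "usage_percent": used * 100 // total,
        "available_percent": available * 100 // total,
    }
```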
Memory Metrics
memory
Description: Collect memory usage statistics including total, used, free, and swap
Required Fields:
- base(BaseCollector): No description available
- collect(MemoryCollectOptions): Memory metrics to collect
- name(String): Collector name (used for metric names)
- poll_interval(u64): Poll interval in seconds (how often to collect this metric)
- thresholds(MemoryThresholds): Thresholds for alerts
Optional Fields:
- enabled(bool): Whether this collector is enabled (default: true)
- labels(HashMap<String, String>): Additional labels for this collector
Examples:
Basic memory metrics collection:
YAML Format:
type: memory
name: memory
collect:
total: true
used: true
free: true
available: true
swap: true
percentage: true
thresholds:
usage_warning: 85.0
usage_critical: 95.0
JSON Format:
{
"type": "memory",
"name": "memory",
"collect": {
"total": true,
"used": true,
"free": true,
"available": true,
"swap": true,
"percentage": true
},
"thresholds": {
"usage_warning": 85.0,
"usage_critical": 95.0
}
}
TOML Format:
[[collectors]]
type = "memory"
name = "memory"
[collectors.collect]
total = true
used = true
free = true
available = true
swap = true
percentage = true
[collectors.thresholds]
usage_warning = 85.0
usage_critical = 95.0
Output:
total_bytes: 8589934592
total_mb: 8192
total_gb: 8
used_bytes: 4294967296
used_mb: 4096
used_gb: 4
free_bytes: 2147483648
free_mb: 2048
free_gb: 2
available_bytes: 3221225472
available_mb: 3072
available_gb: 3
usage_percent: 50
available_percent: 37
memory_pressure: "low"
swap_total_bytes: 2147483648
swap_used_bytes: 536870912
swap_free_bytes: 1610612736
swap_total_mb: 2048
swap_used_mb: 512
swap_free_mb: 1536
swap_usage_percent: 25
swap_pressure: "low"
usage_warning: false
usage_critical: false
Network Metrics
network
Description: Collect network interface statistics and status information
Required Fields:
- base(BaseCollector): No description available
- collect(NetworkCollectOptions): Network metrics to collect
- interfaces(Vec): Network interfaces to monitor (empty = all)
- name(String): Collector name (used for metric names)
- poll_interval(u64): Poll interval in seconds (how often to collect this metric)
Optional Fields:
- enabled(bool): Whether this collector is enabled (default: true)
- labels(HashMap<String, String>): Additional labels for this collector
Examples:
Basic network metrics collection:
YAML Format:
type: network
name: network
interfaces: ["eth0", "wlan0"]
collect:
bytes: true
packets: true
errors: true
status: true
JSON Format:
{
"type": "network",
"name": "network",
"interfaces": ["eth0", "wlan0"],
"collect": {
"bytes": true,
"packets": true,
"errors": true,
"status": true
}
}
TOML Format:
[[collectors]]
type = "network"
name = "network"
interfaces = ["eth0", "wlan0"]
[collectors.collect]
bytes = true
packets = true
errors = true
status = true
Output:
interfaces:
- name: "eth0"
bytes_received: 1234567890
bytes_transmitted: 987654321
total_bytes: 2222222211
packets_received: 1234567
packets_transmitted: 987654
total_packets: 2222221
errors_on_received: 0
errors_on_transmitted: 0
total_errors: 0
status: "up"
- name: "lo"
bytes_received: 123456789
bytes_transmitted: 123456789
total_bytes: 246913578
packets_received: 123456
packets_transmitted: 123456
total_packets: 246912
errors_on_received: 0
errors_on_transmitted: 0
total_errors: 0
status: "up"
labels:
network_type: corporate
Process Metrics
process
Description: Collect process information and resource usage statistics
Required Fields:
- base(BaseCollector): No description available
- collect(ProcessCollectOptions): Process metrics to collect
- name(String): Collector name (used for metric names)
- patterns(Vec): Process name patterns to monitor (empty = all processes)
- poll_interval(u64): Poll interval in seconds (how often to collect this metric)
Optional Fields:
- enabled(bool): Whether this collector is enabled (default: true)
- labels(HashMap<String, String>): Additional labels for this collector
Examples:
Basic process metrics collection:
YAML Format:
type: process
name: process
patterns: ["nginx", "apache", "sshd"]
collect:
count: true
cpu: true
memory: true
status: true
JSON Format:
{
"type": "process",
"name": "process",
"patterns": ["nginx", "apache", "sshd"],
"collect": {
"count": true,
"cpu": true,
"memory": true,
"status": true
}
}
TOML Format:
[[collectors]]
type = "process"
name = "process"
patterns = ["nginx", "apache", "sshd"]
[collectors.collect]
count = true
cpu = true
memory = true
status = true
Output:
total_processes: 150
matched_processes: 3
processes:
- pid: 1234
name: "nginx"
cpu_percent: 5
memory_bytes: 104857600
memory_mb: 100
memory_gb: 0
status: "running"
command: "/usr/sbin/nginx"
parent_pid: 1
- pid: 1235
name: "nginx"
cpu_percent: 3
memory_bytes: 52428800
memory_mb: 50
memory_gb: 0
status: "running"
command: "/usr/sbin/nginx"
parent_pid: 1234
- pid: 5678
name: "apache2"
cpu_percent: 2
memory_bytes: 209715200
memory_mb: 200
memory_gb: 0
status: "sleeping"
command: "/usr/sbin/apache2"
parent_pid: 1
labels:
process_type: web_servers
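Note that in the sample output the pattern `apache` matched the process `apache2`, which suggests substring matching on process names (an inference from the sample, not a documented guarantee). A Python sketch of that filtering:

```python
def match_processes(processes, patterns):
    """Keep processes whose name contains any pattern as a substring."""
    if not patterns:  # an empty pattern list monitors all processes
        return list(processes)
    return [p for p in processes if any(pat in p["name"] for pat in patterns)]
```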
System Information
system
Description: Collect system information including hostname, OS, kernel, uptime, and architecture
Required Fields:
- base(BaseCollector): No description available
- collect(SystemCollectOptions): What system information to collect
- name(String): Collector name (used for metric names)
- poll_interval(u64): Poll interval in seconds (how often to collect this metric)
Optional Fields:
- enabled(bool): Whether this collector is enabled (default: true)
- labels(HashMap<String, String>): Additional labels for this collector
Examples:
Basic system information collection:
YAML Format:
type: system
name: system
collect:
hostname: true
os: true
kernel: true
uptime: true
boot_time: true
arch: true
JSON Format:
{
"type": "system",
"name": "system",
"collect": {
"hostname": true,
"os": true,
"kernel": true,
"uptime": true,
"boot_time": true,
"arch": true
}
}
TOML Format:
[[collectors]]
type = "system"
name = "system"
[collectors.collect]
hostname = true
os = true
kernel = true
uptime = true
boot_time = true
arch = true
Output:
hostname: "myhost.example.com"
os: "linux"
os_family: "unix"
kernel_version: "5.15.0-91-generic"
uptime_seconds: 1234567
boot_time: 1706012345
cpu_arch: "x86_64"
Log Sources/Outputs (logs)
Log processors handle log collection and forwarding. Each processor corresponds to a specific log source or output destination.
Processor Configuration
All log processors support common configuration fields for controlling processing behavior:
- name: Processor name for identification
- enabled: Whether this processor is enabled (default: true)
Log Outputs
console
Description: Output logs to stdout/stderr for debugging
Required Fields:
name(String): Processor name for identification
Optional Fields:
enabled(bool): Whether this processor is enabled (default: true)
Examples:
Console log output:
YAML Format:
logs:
- type: console
format: text
level: info
JSON Format:
{
"logs": [
{
"type": "console",
"format": "text",
"level": "info"
}
]
}
TOML Format:
[[logs]]
type = "console"
format = "text"
level = "info"
file
Description: Write logs to files with rotation and compression
Required Fields:
name(String): Processor name for identification
Optional Fields:
enabled(bool): Whether this processor is enabled (default: true)
Examples:
File log output:
YAML Format:
logs:
- type: file
path: /var/log/app.log
format: json
rotation:
size: 10MB
count: 5
JSON Format:
{
"logs": [
{
"type": "file",
"path": "/var/log/app.log",
"format": "json",
"rotation": {
"size": "10MB",
"count": 5
}
}
]
}
TOML Format:
[[logs]]
type = "file"
path = "/var/log/app.log"
format = "json"
[logs.rotation]
size = "10MB"
count = 5
http
Description: Send logs to HTTP endpoints with authentication and retry
Required Fields:
name(String): Processor name for identification
Optional Fields:
enabled(bool): Whether this processor is enabled (default: true)
s3
Description: Upload logs to S3 with batching and compression
Required Fields:
name(String): Processor name for identification
Optional Fields:
enabled(bool): Whether this processor is enabled (default: true)
syslog
Description: Send logs to syslog with RFC compliance
Required Fields:
name(String): Processor name for identification
Optional Fields:
enabled(bool): Whether this processor is enabled (default: true)
Examples:
Syslog log output:
YAML Format:
logs:
- type: syslog
facility: local0
severity: info
tag: driftless
server: 127.0.0.1:514
protocol: udp
JSON Format:
{
"logs": [
{
"type": "syslog",
"facility": "local0",
"severity": "info",
"tag": "driftless",
"server": "127.0.0.1:514",
"protocol": "udp"
}
]
}
TOML Format:
[[logs]]
type = "syslog"
facility = "local0"
severity = "info"
tag = "driftless"
server = "127.0.0.1:514"
protocol = "udp"
Comprehensive Examples
This section provides complete examples showing how to use Driftless for common configuration management tasks.
Complete Configuration Example
Here’s a complete example showing a typical web server setup:
YAML Format:
vars:
web_user: www-data
web_root: /var/www/html
nginx_config: /etc/nginx/sites-available/default
tasks:
# Install required packages
- type: package
name: nginx
state: present
# Create web directory
- type: file
path: "{{ web_root }}"
state: present
mode: "0755"
owner: "{{ web_user }}"
group: "{{ web_user }}"
# Configure nginx
- type: file
path: "{{ nginx_config }}"
state: present
content: |
server {
listen 80;
root {{ web_root }};
index index.html index.htm;
location / {
try_files $uri $uri/ =404;
}
}
mode: "0644"
owner: root
group: root
# Create index page
- type: file
path: "{{ web_root }}/index.html"
state: present
content: |
<!DOCTYPE html>
<html>
<head><title>Welcome to Driftless</title></head>
<body><h1>Hello from Driftless!</h1></body>
</html>
mode: "0644"
owner: "{{ web_user }}"
group: "{{ web_user }}"
# Start and enable nginx service
- type: service
name: nginx
state: started
enabled: true
JSON Format:
{
"vars": {
"web_user": "www-data",
"web_root": "/var/www/html",
"nginx_config": "/etc/nginx/sites-available/default"
},
"tasks": [
{
"type": "package",
"name": "nginx",
"state": "present"
},
{
"type": "file",
"path": "{{ web_root }}",
"state": "present",
"mode": "0755",
"owner": "{{ web_user }}",
"group": "{{ web_user }}"
},
{
"type": "file",
"path": "{{ nginx_config }}",
"state": "present",
"content": "server {\n listen 80;\n root {{ web_root }};\n index index.html index.htm;\n\n location / {\n try_files $uri $uri/ =404;\n }\n}",
"mode": "0644",
"owner": "root",
"group": "root"
},
{
"type": "file",
"path": "{{ web_root }}/index.html",
"state": "present",
"content": "<!DOCTYPE html>\n<html>\n<head><title>Welcome to Driftless</title></head>\n<body><h1>Hello from Driftless!</h1></body>\n</html>",
"mode": "0644",
"owner": "{{ web_user }}",
"group": "{{ web_user }}"
},
{
"type": "service",
"name": "nginx",
"state": "started",
"enabled": true
}
]
}
TOML Format:
[vars]
web_user = "www-data"
web_root = "/var/www/html"
nginx_config = "/etc/nginx/sites-available/default"
[[tasks]]
type = "package"
name = "nginx"
state = "present"
[[tasks]]
type = "file"
path = "{{ web_root }}"
state = "present"
mode = "0755"
owner = "{{ web_user }}"
group = "{{ web_user }}"
[[tasks]]
type = "file"
path = "{{ nginx_config }}"
state = "present"
content = """
server {
listen 80;
root {{ web_root }};
index index.html index.htm;
location / {
try_files $uri $uri/ =404;
}
}
"""
mode = "0644"
owner = "root"
group = "root"
[[tasks]]
type = "file"
path = "{{ web_root }}/index.html"
state = "present"
content = """
<!DOCTYPE html>
<html>
<head><title>Welcome to Driftless</title></head>
<body><h1>Hello from Driftless!</h1></body>
</html>
"""
mode = "0644"
owner = "{{ web_user }}"
group = "{{ web_user }}"
[[tasks]]
type = "service"
name = "nginx"
state = "started"
enabled = true
Driftless Template Reference
Comprehensive reference for all available Jinja2 template filters and functions in Driftless.
This documentation is auto-generated from the Rust source code.
Overview
Driftless uses Jinja2 templating for dynamic configuration values. Templates support both filters (applied with | syntax) and functions (called directly).
Template Syntax
{{ variable | filter_name(arg1, arg2) }}
{{ function_name(arg1, arg2) }}
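The `|` pipe applies each filter to the value produced by the previous stage, left to right. A toy Python model of that evaluation order (Driftless's real engine is a Rust Jinja2 implementation; the filter bodies here are simplified stand-ins):

```python
# A value flows through filters left to right: {{ v | sort | join(",") }}
FILTERS = {
    "sort": lambda v: sorted(v),
    "join": lambda v, sep="": sep.join(v),
    "first": lambda v: v[0],
}

def apply_filters(value, *stages):
    """Each stage is (filter_name, args); the output of one stage feeds the next."""
    for name, args in stages:
        value = FILTERS[name](value, *args)
    return value
```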
Template Filters
Filters transform values in templates using the | syntax.
Encoding/Decoding
b64decode
Decode a base64 encoded string.
Usage:
{{ value | b64decode }}
b64encode
Encode a string using base64 encoding.
Usage:
{{ value | b64encode }}
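These two filters behave like standard base64; in Python terms:

```python
import base64

def b64encode(s: str) -> str:
    """Encode UTF-8 text to its base64 representation."""
    return base64.b64encode(s.encode()).decode()

def b64decode(s: str) -> str:
    """Decode a base64 string back to UTF-8 text."""
    return base64.b64decode(s).decode()
```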
from_json
Parse a JSON string into a value.
Usage:
{{ value | from_json }}
from_yaml
Parse a YAML string into a value.
Usage:
{{ value | from_yaml }}
mandatory
Fail if the value is undefined, None, or empty. Otherwise return the value.
Usage:
{{ value | mandatory }}
regex_escape
Escape special regex characters in a string.
Usage:
{{ value | regex_escape }}
regex_findall
Find all matches of a regex pattern in a string.
Arguments:
pattern(string): The regex pattern to search for
Usage:
{{ value | regex_findall(pattern) }}
regex_replace
Replace matches of a regex pattern in a string.
Arguments:
pattern(string): The regex pattern to search for
replacement(string): The replacement string
Usage:
{{ value | regex_replace(pattern, replacement) }}
regex_search
Search for a regex pattern in a string and return the first match.
Arguments:
pattern(string): The regex pattern to search for
Usage:
{{ value | regex_search(pattern) }}
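The three regex filters above correspond to the familiar regex operations; in Python terms (the exact regex dialect of Driftless's Rust implementation may differ slightly):

```python
import re

def regex_findall(value, pattern):
    return re.findall(pattern, value)           # all matches, as a list

def regex_replace(value, pattern, replacement):
    return re.sub(pattern, replacement, value)  # replace every match

def regex_search(value, pattern):
    m = re.search(pattern, value)               # first match, or None
    return m.group(0) if m else None
```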
to_json
Serialize a value to a JSON string.
Arguments:
indent: Number of spaces for indentation (optional)
Usage:
{{ value | to_json(indent) }}
to_nice_json
Convert a value to a nicely formatted JSON string.
Arguments:
indent(integer): Number of spaces for indentation (optional, default: 2)
Usage:
{{ value | to_nice_json(indent) }}
to_nice_yaml
Convert a value to a nicely formatted YAML string.
Arguments:
indent(integer): Number of spaces for indentation (optional, default: 2)
Usage:
{{ value | to_nice_yaml(indent) }}
to_yaml
Serialize a value to a YAML string.
Usage:
{{ value | to_yaml }}
urldecode
URL decode a string.
Usage:
{{ value | urldecode }}
urlencode
URL encode a string.
Usage:
{{ value | urlencode }}
List Operations
batch
Batch items in a list into groups of a specified size
Arguments:
size(integer): Size of each batch
fill_with(any): Value to fill incomplete batches (optional)
Usage:
{{ value | batch(size, fill_with) }}
first
Get the first item from a list
Usage:
{{ value | first }}
join
Join a list of strings with a separator
Arguments:
separator(string): String to join with (optional, default: empty string)
Usage:
{{ value | join(separator) }}
last
Get the last item from a list
Usage:
{{ value | last }}
reverse
Reverse the order of items in a list
Usage:
{{ value | reverse }}
sort
Sort items in a list
Arguments:
reverse(boolean): Sort in reverse order (optional, default: false)
case_sensitive(boolean): Case sensitive sorting for strings (optional, default: true)
Usage:
{{ value | sort(reverse, case_sensitive) }}
unique
Remove duplicate items from a list
Usage:
{{ value | unique }}
List/Dict Operations
combine
Combine multiple dictionaries into one. Later dictionaries override earlier ones.
Arguments:
dictionaries: Additional dictionaries to combine
Usage:
{{ value | combine(dictionaries) }}
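The override order (later dictionaries win) matches a plain left-to-right dict merge; a minimal sketch:

```python
def combine(*dicts):
    """Merge dictionaries left to right; later keys override earlier ones."""
    merged = {}
    for d in dicts:
        merged.update(d)
    return merged
```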
dict2items
Convert a dictionary to a list of items with ‘key’ and ‘value’ fields.
Usage:
{{ value | dict2items }}
dictsort
Sort a dictionary by keys or values
Arguments:
case_sensitive(boolean): Whether sorting is case sensitive (optional, default: false)
by(string): Sort by 'key' or 'value' (optional, default: 'key')
reverse(boolean): Reverse the sort order (optional, default: false)
Usage:
{{ value | dictsort(case_sensitive, by, reverse) }}
flatten
Flatten a nested list structure.
Usage:
{{ value | flatten }}
items2dict
Convert a list of items with ‘key’ and ‘value’ fields back to a dictionary.
Usage:
{{ value | items2dict }}
map
Apply an attribute or filter to each item in a list.
Arguments:
attribute: Attribute name or filter to apply
Usage:
{{ value | map(attribute) }}
reject
Reject items from a list that match a test.
Arguments:
test(string): Test to apply (supports: defined, truthy, undefined, none, falsy, equalto, match, search, version_compare)
arg: Optional argument for tests that require it (e.g., value for equalto, regex for match)
Usage:
{{ value | reject(test, arg) }}
select
Select items from a list that match a test.
Arguments:
test(string): Test to apply (supports: defined, truthy, undefined, none, falsy, equalto, match, search, version_compare)
arg: Optional argument for tests that require it (e.g., value for equalto, regex for match)
Usage:
{{ value | select(test, arg) }}
slice
Slice a list into sublists of a specified size
Arguments:
size(integer): Size of each slice
Usage:
{{ value | slice(size) }}
zip
Zip multiple lists together into a list of tuples.
Arguments:
lists: Additional lists to zip with
Usage:
{{ value | zip(lists) }}
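The dictionary filters above round-trip cleanly, which is easiest to see in a plain-Python emulation (illustrative only; `combine`, `dict2items`, and `items2dict` below are hypothetical re-implementations, not the Driftless code):

```python
def combine(*dicts):
    """Merge dictionaries; later dictionaries override earlier ones."""
    merged = {}
    for d in dicts:
        merged.update(d)
    return merged

def dict2items(d):
    """Dictionary -> list of {'key': ..., 'value': ...} items."""
    return [{"key": k, "value": v} for k, v in d.items()]

def items2dict(items):
    """Inverse of dict2items."""
    return {item["key"]: item["value"] for item in items}

base = {"port": 80, "tls": False}
override = {"tls": True}
print(combine(base, override))               # {'port': 80, 'tls': True}
print(items2dict(dict2items(base)) == base)  # True
```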
Math/Logic Operations
abs
Return the absolute value of a number
Usage:
{{ value | abs }}
bool
Convert value to boolean
Usage:
{{ value | bool }}
float
Convert a value to a floating-point number
Arguments:
default(number): Default value if conversion fails (optional)
Usage:
{{ value | float(default) }}
int
Convert a value to an integer
Arguments:
default(integer): Default value if conversion fails (optional, default: 0)
base(integer): Base for string conversion (optional, default: 10)
Usage:
{{ value | int(default, base) }}
log
Return the logarithm of a number
Arguments:
base(number): The base of the logarithm (optional, default: e)
Usage:
{{ value | log(base) }}
pow
Return a number raised to a power
Arguments:
exp(number): The exponent
Usage:
{{ value | pow(exp) }}
random
Return a random number, optionally within a specified range
Arguments:
start(integer): The starting value of the range (optional)
end(integer): The ending value of the range (optional)
Usage:
{{ value | random(start, end) }}
range
Generate a list of numbers in a range
Arguments:
start(integer): Start of the range (optional, default: 0)
end(integer): End of the range (required if start is provided)
step(integer): Step size (optional, default: 1)
Usage:
{{ value | range(start, end, step) }}
round
Round a number to a given precision (default 0 decimal places)
Arguments:
precision(integer): The number of decimal places to round to (optional, default: 0)
Usage:
{{ value | round(precision) }}
sqrt
Return the square root of a number
Usage:
{{ value | sqrt }}
ternary
Return one of two values based on condition (true_val if condition is true, false_val if false)
Arguments:
true_val(any): The value to return if the condition is true
false_val(any): The value to return if the condition is false
Usage:
{{ value | ternary(true_val, false_val) }}
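The conversion filters above (`int` with a fallback default, `ternary`) map onto familiar Python idioms; the sketch below emulates them (illustrative only, with `to_int` and `ternary` as hypothetical helper names), including the usual floating-point caveat for `round`:

```python
def to_int(value, default=0, base=10):
    """int filter: convert, falling back to a default on failure."""
    try:
        return int(value, base) if isinstance(value, str) else int(value)
    except (TypeError, ValueError):
        return default

def ternary(condition, true_val, false_val):
    """ternary filter: pick one of two values based on a condition."""
    return true_val if condition else false_val

print(to_int("ff", base=16))        # 255
print(to_int("oops", default=-1))   # -1
print(ternary(3 > 2, "yes", "no"))  # yes
print(round(2.675, 2))              # 2.67 -- binary floats round down here, not to 2.68
```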
Path Operations
basename
Return the basename of a path
Usage:
{{ value | basename }}
dirname
Return the directory name of a path
Usage:
{{ value | dirname }}
expanduser
Expand a path containing a tilde (~) to the user’s home directory.
Usage:
{{ value | expanduser }}
realpath
Return the canonical absolute path, resolving symlinks and relative components.
Usage:
{{ value | realpath }}
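The path filters above correspond closely to Python's `os.path` helpers, which makes their behavior easy to check (illustrative comparison only; Driftless implements these in Rust):

```python
import os.path

path = "/etc/driftless/agent.yml"
print(os.path.basename(path))           # agent.yml
print(os.path.dirname(path))            # /etc/driftless
print(os.path.expanduser("~/.config"))  # depends on the current user's $HOME
# realpath additionally resolves symlinks, so its output depends on the filesystem
```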
String Operations
capitalize
Capitalize the first character of a string
Usage:
{{ value | capitalize }}
center
Center a string in a field of given width
Arguments:
width(integer): Width of the field
fillchar(string): Character to fill with (optional, default: space)
Usage:
{{ value | center(width, fillchar) }}
comment
Wrap a string in comment markers
Arguments:
style(string): Comment style (optional, default: #)
Usage:
{{ value | comment(style) }}
format
Format a string with placeholders
Arguments:
args(variadic): Arguments to format into the string
Usage:
{{ value | format(args) }}
indent
Indent each line of a string
Arguments:
width(integer): Number of spaces to indent (optional, default: 0)
indentfirst(boolean): Whether to indent the first line (optional, default: false)
Usage:
{{ value | indent(width, indentfirst) }}
ljust
Left-justify a string in a field of given width
Arguments:
width(integer): Width of the field
fillchar(string): Character to fill with (optional, default: space)
Usage:
{{ value | ljust(width, fillchar) }}
lower
Convert a string to lowercase
Usage:
{{ value | lower }}
lstrip
Remove leading whitespace from a string
Usage:
{{ value | lstrip }}
rjust
Right-justify a string in a field of given width
Arguments:
width(integer): Width of the field
fillchar(string): Character to fill with (optional, default: space)
Usage:
{{ value | rjust(width, fillchar) }}
rstrip
Remove trailing whitespace from a string
Usage:
{{ value | rstrip }}
splitlines
Split a string into a list of lines
Usage:
{{ value | splitlines }}
strip
Remove leading and trailing whitespace from a string
Usage:
{{ value | strip }}
title
Convert a string to title case
Usage:
{{ value | title }}
truncate
Truncate a string to a specified length
Arguments:
length(integer): Maximum length of the resulting string
killwords(boolean): If true, truncate at character boundary; if false, try to truncate at word boundary (optional, default: false)
end(string): String to append when truncation occurs (optional, default: "...")
Usage:
{{ value | truncate(50) }}
{{ value | truncate(20, false, "...") }}
{{ value | truncate(30, true, "[truncated]") }}
upper
Convert a string to uppercase
Usage:
{{ value | upper }}
wordcount
Count the number of words in a string
Usage:
{{ value | wordcount }}
wordwrap
Wrap a string to a specified width
Arguments:
width(integer): Maximum width of each line (optional, default: 79)
Usage:
{{ value | wordwrap(width) }}
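The `truncate` and `wordwrap` behavior described above can be emulated in plain Python (an illustration under the documented semantics, not the Driftless implementation; `textwrap.fill` is used as a stand-in for `wordwrap`):

```python
import textwrap

def truncate(value, length, killwords=False, end="..."):
    """truncate filter: cut at length; without killwords, back up to a word boundary."""
    if len(value) <= length:
        return value
    cut = value[:length]
    if not killwords and " " in cut:
        cut = cut.rsplit(" ", 1)[0]  # drop the partial trailing word
    return cut + end

print(truncate("the quick brown fox jumps", 10))        # the quick...
print(truncate("the quick brown fox jumps", 10, True))  # the quick ...

# wordwrap maps naturally onto textwrap.fill
print(textwrap.fill("one two three four", width=9))
```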
String/List Operations
length
Return the length of a string, list, or object
Usage:
{{ value | length }}
Template Functions
Functions perform operations and return values in templates.
Generator Functions
random
Generate random numbers.
Arguments:
min(int): The minimum value (optional; with a single argument, that argument is the exclusive maximum)
max(int): The maximum value (exclusive)
Usage:
{{ random(min, max) }}
range
Generate a sequence of numbers.
Arguments:
end_or_start(int): The end value (exclusive) for a single argument, or the start value when more arguments are given
end(int): The end value (exclusive)
step(int): The step value (optional, defaults to 1)
Usage:
{{ range(end_or_start, end, step) }}
Lookup Functions
lookup
Look up values from various sources (env, file, etc.)
Arguments:
type(string): The lookup type (env, file, template, pipe)
key(string): The key/path/command to look up
Usage:
{{ lookup('env', 'HOME') }}
{{ lookup('env', 'USER') }}
Path Operations
basename
Return the basename of a path
Arguments:
path(string): The path to extract the basename from
Usage:
{{ basename('/path/to/file.txt') }}
{{ basename(path_variable) }}
dirname
Return the directory name of a path
Arguments:
path(string): The path to extract the directory name from
Usage:
{{ dirname('/path/to/file.txt') }}
{{ dirname(path_variable) }}
Utility Functions
ansible_date_time
Return current date/time information in Ansible format
Usage:
{{ ansible_date_time() }}
ansible_managed
Return a string indicating the file is managed by Ansible
Usage:
{{ ansible_managed() }}
expandvars
Expand environment variables in a string
Arguments:
string(string): The string containing environment variables to expand
Usage:
{{ expandvars(string) }}
hash
Return the hash of a string using the specified algorithm
Arguments:
value(string): The string to hash
algorithm(string): The hash algorithm (md5, sha1, sha256, sha384, sha512)
Usage:
{{ hash(value, algorithm) }}
include_vars
Include variables from files (YAML, JSON, etc.)
Arguments:
file(string): Path to the file containing variables
Usage:
{{ include_vars(file) }}
length
Return the length of a string, array, or object
Arguments:
value(any): The value to get the length of (string, array, or object)
Usage:
{{ length('hello') }}
{{ length(items) }}
{{ length(my_object) }}
query
Query various sources for data (inventory, files, etc.)
Arguments:
query_type(string): The type of query (inventory_hostnames, file, etc.)
query_args(any): Arguments for the query
Usage:
{{ query(query_type, query_args) }}
timestamp
Return the current timestamp
Arguments:
format(string): Optional strftime format string (default: ISO 8601)
Usage:
{{ timestamp(format) }}
uuid
Generate a random UUID4
Usage:
{{ uuid() }}
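The `hash`, `uuid`, and `timestamp` functions above have direct standard-library analogues, sketched here in plain Python (an emulation of the documented behavior, not the Driftless implementation; `hash_value` is a hypothetical helper name):

```python
import hashlib
import uuid
from datetime import datetime, timezone

def hash_value(value, algorithm="sha256"):
    """hash function: digest a string with a named algorithm."""
    h = hashlib.new(algorithm)
    h.update(value.encode("utf-8"))
    return h.hexdigest()

def timestamp(fmt=None):
    """timestamp function: current UTC time, ISO 8601 by default."""
    now = datetime.now(timezone.utc)
    return now.strftime(fmt) if fmt else now.isoformat()

print(hash_value("hello"))     # sha256 hex digest of "hello"
print(str(uuid.uuid4()))       # random UUID4, 36 characters
print(timestamp("%Y-%m-%d"))   # current date
```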
Examples
# Using filters
path: "/home/{{ username | lower }}"
config: "{{ app_name | upper }}.conf"
truncated: "{{ long_text | truncate(50) }}"
# Using functions
length: "{{ length(items) }}"
basename: "{{ basename('/path/to/file.txt') }}"
env_var: "{{ lookup('env', 'HOME') }}"
User Guide
This section contains guides and tutorials for using Driftless.
Agent Mode
Driftless agent mode provides continuous configuration enforcement, metrics collection, and log forwarding for infrastructure automation and monitoring.
Features
- Continuous Configuration Enforcement: Automatically applies configuration changes at specified intervals
- Metrics Collection: Gathers system metrics and exposes them via Prometheus-compatible endpoint
- Log Forwarding: Collects and forwards logs to various destinations (S3, HTTP, syslog, etc.)
- Configuration Drift Detection: Monitors for configuration drift and automatically corrects it
- Resource Monitoring: Built-in resource usage monitoring with configurable limits
- Circuit Breaker Pattern: Graceful degradation when individual components fail
- Hot Configuration Reload: Automatically reloads configuration when files change
Configuration Directory Resolution
Driftless automatically detects configuration directories in this order:
- System-wide: /etc/driftless/ (highest priority, for system administrators)
- User-specific: ~/.config/driftless/ (fallback, for individual users)
You can also explicitly specify a configuration directory using the --config CLI flag.
Quick Start
User Configuration
- Create agent configuration:
# ~/.config/driftless/config/agent.yml
config_dir: "~/.config/driftless/config"
apply_interval: 300 # 5 minutes
facts_interval: 60 # 1 minute
apply_dry_run: false
metrics_port: 8000
enabled: true
System-wide Configuration
- Create agent configuration:
# /etc/driftless/agent.yml
config_dir: "/etc/driftless/config"
apply_interval: 300 # 5 minutes
facts_interval: 60 # 1 minute
apply_dry_run: false
metrics_port: 8000
enabled: true
- Create apply configuration:
# ~/.config/driftless/config/apply.yml
tasks:
- type: package
name: nginx
state: present
- type: service
name: nginx
state: started
enabled: true
- Create facts configuration:
# ~/.config/driftless/config/facts.yml
collectors:
- type: cpu
interval: 60
- type: memory
interval: 60
- type: disk
interval: 300
paths: ["/", "/var", "/tmp"]
exporters:
- type: prometheus
port: 8000
- Create logs configuration:
# ~/.config/driftless/config/logs.yml
sources:
- type: file
name: nginx-access
paths: ["/var/log/nginx/access.log"]
parser: common
- type: file
name: system-auth
paths: ["/var/log/auth.log"]
parser: syslog
outputs:
- type: s3
name: log-archive
bucket: my-logs-bucket
region: us-east-1
prefix: logs/
compression:
algorithm: gzip
- type: http
name: elk-forwarder
url: http://elasticsearch:9200/_bulk
method: POST
batch:
max_size: 100
max_age: 60
- Start the agent:
driftless agent
Configuration Options
Agent Configuration (agent.yml)
# Directory containing configuration files to monitor
config_dir: "~/.config/driftless/config"
# Interval for running apply tasks (seconds)
apply_interval: 300
# Interval for collecting facts (seconds)
facts_interval: 60
# Whether to run apply tasks in dry-run mode
apply_dry_run: false
# Port for Prometheus metrics endpoint
metrics_port: 8000
# Whether agent is enabled
enabled: true
Apply Configuration (apply.yml)
Standard apply configuration with additional agent-specific options:
# Apply tasks to run continuously
tasks:
- type: package
name: nginx
state: present
- type: service
name: nginx
state: started
enabled: true
# Agent-specific settings
agent:
# Maximum execution time per apply cycle (seconds)
timeout: 300
# Continue on individual task failures
continue_on_error: true
Facts Configuration (facts.yml)
# Facts collectors to run
collectors:
- type: cpu
interval: 60
enabled: true
- type: memory
interval: 60
enabled: true
- type: disk
interval: 300
paths: ["/", "/var", "/tmp"]
enabled: true
- type: network
interval: 60
interfaces: ["eth0", "wlan0"]
enabled: true
# Exporters for collected facts
exporters:
- type: prometheus
port: 8000
path: "/metrics"
enabled: true
- type: s3
bucket: my-metrics-bucket
region: us-east-1
prefix: metrics/
interval: 300
enabled: true
Logs Configuration (logs.yml)
# Log sources to monitor
sources:
- type: file
name: nginx-access
paths: ["/var/log/nginx/access.log", "/var/log/nginx/error.log"]
parser: common
multiline:
pattern: '^\d{4}-\d{2}-\d{2}'
negate: false
enabled: true
- type: file
name: application
paths: ["/var/log/application/*.log"]
parser: json
enabled: true
# Log outputs for forwarding
outputs:
- type: s3
name: log-archive
bucket: my-logs-bucket
region: us-east-1
prefix: logs/
compression:
algorithm: gzip
batch:
max_size: 1000
max_age: 300
enabled: true
- type: http
name: elk-forwarder
url: http://elasticsearch:9200/_bulk
method: POST
headers:
Content-Type: "application/x-ndjson"
auth:
type: basic
username: elastic
password: "{{ elasticsearch_password }}"
batch:
max_size: 100
max_age: 60
enabled: true
- type: syslog
name: local-syslog
facility: user
severity: info
enabled: true
- type: file
name: local-archive
path: "/var/log/driftless/archive"
rotation:
max_size: "100MB"
max_age: "7d"
max_files: 10
enabled: true
Monitoring and Metrics
The agent exposes Prometheus-compatible metrics at http://localhost:8000/metrics:
# HELP driftless_agent_apply_execution_count_total Total number of apply executions
# TYPE driftless_agent_apply_execution_count_total counter
driftless_agent_apply_execution_count_total 42
# HELP driftless_agent_apply_execution_duration_seconds Duration of apply executions
# TYPE driftless_agent_apply_execution_duration_seconds histogram
driftless_agent_apply_execution_duration_seconds_bucket{le="0.1"} 0
driftless_agent_apply_execution_duration_seconds_bucket{le="0.5"} 2
driftless_agent_apply_execution_duration_seconds_bucket{le="1"} 5
driftless_agent_apply_execution_duration_seconds_bucket{le="5"} 40
driftless_agent_apply_execution_duration_seconds_bucket{le="10"} 42
driftless_agent_apply_execution_duration_seconds_bucket{le="+Inf"} 42
# HELP driftless_agent_facts_collection_count_total Total number of facts collections
# TYPE driftless_agent_facts_collection_count_total counter
driftless_agent_facts_collection_count_total 120
# HELP driftless_agent_logs_processed_entries_total Total number of log entries processed
# TYPE driftless_agent_logs_processed_entries_total counter
driftless_agent_logs_processed_entries_total 15432
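Because the endpoint serves the standard Prometheus text format, any scraper can consume it; a minimal sketch of parsing unlabeled samples in plain Python (illustrative only; real deployments would use Prometheus itself or a client library):

```python
def parse_metrics(text):
    """Parse Prometheus text-format samples into {metric_name: value}, skipping comments."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # HELP/TYPE comments and blank lines carry no sample data
        name, _, value = line.rpartition(" ")
        samples[name] = float(value)
    return samples

text = """\
# HELP driftless_agent_apply_execution_count_total Total number of apply executions
# TYPE driftless_agent_apply_execution_count_total counter
driftless_agent_apply_execution_count_total 42
driftless_agent_logs_processed_entries_total 15432
"""
print(parse_metrics(text))
```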
Operational Commands
# Start agent in foreground
driftless agent
# Start agent with a specific config directory
driftless --config /etc/driftless agent
# Start agent with custom log level
RUST_LOG=debug driftless agent
# Check agent status (when running)
curl http://localhost:8000/status
# View agent metrics
curl http://localhost:8000/metrics
# Stop agent gracefully (send SIGTERM)
kill $(pgrep driftless)
Common Use Cases
Infrastructure Monitoring Agent
# agent.yml
config_dir: "/etc/driftless/config"
apply_interval: 3600 # 1 hour
facts_interval: 60 # 1 minute
metrics_port: 9090
# facts.yml
collectors:
- type: system
- type: cpu
- type: memory
- type: disk
- type: network
exporters:
- type: prometheus
port: 9090
Log Aggregation Agent
# agent.yml
config_dir: "/etc/driftless/config"
facts_interval: 300 # 5 minutes (reduced frequency)
# logs.yml
sources:
- type: file
name: all-logs
paths: ["/var/log/**/*.log"]
parser: auto
outputs:
- type: s3
bucket: centralized-logs
region: us-east-1
compression: {algorithm: gzip}
Configuration Enforcement Agent
# agent.yml
config_dir: "/etc/driftless/config"
apply_interval: 600 # 10 minutes
apply_dry_run: false
# apply.yml
tasks:
- type: package
name: security-tools
state: present
- type: file
path: "/etc/security/policy.conf"
state: present
content: |
# Security policy enforced by driftless
enforce_password_policy = true
Troubleshooting
Agent Won’t Start
# Check configuration syntax
driftless agent --validate-config
# Check file permissions
ls -la ~/.config/driftless/config/
# Check logs
RUST_LOG=debug driftless agent 2>&1 | head -50
High Resource Usage
# Check metrics endpoint
curl http://localhost:8000/metrics | grep driftless_agent
# Reduce collection intervals
# Edit agent.yml and restart agent
Configuration Not Reloading
# Check file permissions
ls -la ~/.config/driftless/config/agent.yml
# Verify configuration syntax
driftless agent --validate-config
# Check agent logs for reload messages
Agent Deployment Guide
This guide covers deploying the Driftless agent in production environments.
Prerequisites
- Linux system (Ubuntu 18.04+, CentOS 7+, or equivalent)
- Rust toolchain (for building from source)
- Systemd (for service management)
- Network access for metrics and log shipping
Installation
Option 1: Install from Source
# Clone the repository
git clone https://github.com/driftless-hq/driftless.git
cd driftless
# Build the binary
cargo build --release
# Install the binary
sudo cp target/release/driftless /usr/local/bin/
Option 2: Download Pre-built Binary
# Download the latest release
wget https://github.com/driftless-hq/driftless/releases/latest/download/driftless-linux-x64.tar.gz
tar -xzf driftless-linux-x64.tar.gz
sudo mv driftless /usr/local/bin/
Configuration
Create a configuration directory and files:
sudo mkdir -p /etc/driftless
sudo chown -R driftless:driftless /etc/driftless
Agent Configuration (/etc/driftless/agent.yml)
# Agent operational settings
apply_interval: 300 # Apply tasks every 5 minutes
facts_interval: 60 # Collect facts every minute
apply_dry_run: false # Enable for production
metrics_port: 8000 # Prometheus metrics port
enabled: true # Enable agent operation
# Resource limits
max_memory_mb: 512 # Memory limit
max_cpu_percent: 50 # CPU limit
# Circuit breaker settings
circuit_breaker_threshold: 5 # Failures before opening circuit
circuit_breaker_timeout: 300 # Seconds to wait before retry
# Logging
log_level: info
log_file: /var/log/driftless/agent.log
Facts Configuration (/etc/driftless/facts.yml)
collectors:
- name: system
enabled: true
interval: 60
- name: network
enabled: true
interval: 300
- name: disk
enabled: true
interval: 60
Apply Tasks Configuration (/etc/driftless/apply.yml)
tasks:
- name: ensure-ntp
package:
name: ntp
state: present
- name: configure-firewall
ufw:
state: enabled
rules:
- port: 22
proto: tcp
Logs Configuration (/etc/driftless/logs.yml)
sources:
- name: system-logs
type: file
path: /var/log/syslog
parser: syslog
destinations:
- name: elk-stack
type: elasticsearch
url: https://elk.example.com:9200
index: driftless-%{+YYYY.MM.dd}
Systemd Service Setup
Create the systemd service file:
sudo tee /etc/systemd/system/driftless-agent.service > /dev/null <<EOF
[Unit]
Description=Driftless Configuration Management Agent
After=network.target
Wants=network.target
[Service]
Type=simple
User=driftless
Group=driftless
ExecStart=/usr/local/bin/driftless --config /etc/driftless agent
Restart=always
RestartSec=10
Environment=RUST_LOG=info
# Security settings
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
ReadWritePaths=/etc/driftless /var/log/driftless
ProtectHome=yes
# Resource limits
MemoryLimit=512M
CPUQuota=50%
[Install]
WantedBy=multi-user.target
EOF
Create the driftless user:
sudo useradd --system --shell /bin/false --home /var/lib/driftless --create-home driftless
sudo mkdir -p /var/log/driftless
sudo chown driftless:driftless /var/log/driftless
Enable and start the service:
sudo systemctl daemon-reload
sudo systemctl enable driftless-agent
sudo systemctl start driftless-agent
sudo systemctl status driftless-agent
Docker Deployment
Dockerfile
FROM rust:1.92-slim as builder
WORKDIR /app
COPY . .
RUN cargo build --release
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/driftless /usr/local/bin/driftless
USER nobody
ENTRYPOINT ["/usr/local/bin/driftless"]
Docker Compose
version: '3.8'
services:
driftless-agent:
build: .
volumes:
- ./config:/etc/driftless:ro
- ./logs:/var/log/driftless
ports:
- "8000:8000"
restart: unless-stopped
environment:
- RUST_LOG=info
command: ["--config", "/etc/driftless", "agent"]
Kubernetes Deployment
Deployment Manifest
apiVersion: apps/v1
kind: Deployment
metadata:
name: driftless-agent
spec:
replicas: 1
selector:
matchLabels:
app: driftless-agent
template:
metadata:
labels:
app: driftless-agent
spec:
containers:
- name: driftless-agent
image: your-registry/driftless:latest
args: ["--config", "/etc/driftless", "agent"]
ports:
- containerPort: 8000
name: metrics
volumeMounts:
- name: config
mountPath: /etc/driftless
readOnly: true
- name: logs
mountPath: /var/log/driftless
resources:
limits:
memory: 512Mi
cpu: "500m"
requests:
memory: 256Mi
cpu: "100m"
livenessProbe:
httpGet:
path: /health
port: 8000
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /ready
port: 8000
initialDelaySeconds: 5
periodSeconds: 5
volumes:
- name: config
configMap:
name: driftless-config
- name: logs
emptyDir: {}
ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
name: driftless-config
data:
agent.yml: |
apply_interval: 300
facts_interval: 60
apply_dry_run: false
metrics_port: 8000
enabled: true
facts.yml: |
collectors:
- name: system
enabled: true
apply.yml: |
tasks: []
logs.yml: |
sources: []
destinations: []
Monitoring Setup
Prometheus Configuration
Add to your prometheus.yml:
scrape_configs:
- job_name: 'driftless-agent'
static_configs:
- targets: ['localhost:8000']
scrape_interval: 30s
Grafana Dashboard
Create panels for:
- Agent uptime and status
- Task execution success/failure rates
- Facts collection metrics
- Memory and CPU usage
- Circuit breaker status
Security Considerations
- Run as non-root user: Always use a dedicated user account
- Minimal permissions: Only grant necessary file system access
- Network isolation: Restrict network access as needed
- Configuration encryption: Store sensitive config in secure locations
- Log rotation: Configure logrotate for agent logs
- Updates: Regularly update the agent binary for security patches
Troubleshooting
See the Operations Guide for detailed troubleshooting procedures.
Agent Operations Guide
This guide covers operational procedures for managing the Driftless agent in production.
Service Management
Checking Agent Status
# Systemd service status
sudo systemctl status driftless-agent
# Check if agent is responding
curl http://localhost:8000/health
# View agent logs
sudo journalctl -u driftless-agent -f
# or
tail -f /var/log/driftless/agent.log
Starting/Stopping the Agent
# Start agent
sudo systemctl start driftless-agent
# Stop agent
sudo systemctl stop driftless-agent
# Restart agent
sudo systemctl restart driftless-agent
# Reload configuration (if supported)
sudo systemctl reload driftless-agent
Manual Agent Execution
For testing or troubleshooting:
# Run agent in dry-run mode
driftless --config /etc/driftless agent --dry-run
# Run with custom intervals
driftless --config /etc/driftless agent --apply-interval 60 --facts-interval 30
# Run once and exit
driftless --config /etc/driftless agent --single-run
Configuration Management
Hot Configuration Reload
The agent supports hot reloading of configuration files:
# Edit configuration
sudo vi /etc/driftless/agent.yml
# The agent will automatically detect changes and reload
# Check logs for confirmation
sudo journalctl -u driftless-agent -n 20
Configuration Validation
# Validate configuration syntax
driftless --config /etc/driftless agent --validate-config
# Test configuration with dry run
driftless --config /etc/driftless agent --dry-run --apply-interval 1
Backup and Restore
# Backup configuration
sudo cp -r /etc/driftless /etc/driftless.backup.$(date +%Y%m%d)
# Restore configuration
sudo cp -r /etc/driftless.backup.20231201 /etc/driftless
sudo systemctl restart driftless-agent
Monitoring and Metrics
Prometheus Metrics
The agent exposes metrics at http://localhost:8000/metrics:
# Available metrics
curl http://localhost:8000/metrics
# Key metrics to monitor:
# - driftless_agent_uptime_seconds
# - driftless_tasks_executed_total
# - driftless_facts_collected_total
# - driftless_config_reload_total
# - driftless_circuit_breaker_state
# - driftless_memory_usage_bytes
# - driftless_cpu_usage_percent
Health Checks
# Overall health
curl http://localhost:8000/health
# Readiness check
curl http://localhost:8000/ready
# Deep health check (includes subsystem status)
curl http://localhost:8000/health/deep
Log Analysis
# Search for errors
grep "ERROR" /var/log/driftless/agent.log
# Check recent activity
tail -n 50 /var/log/driftless/agent.log
# Monitor task execution
grep "task.*executed" /var/log/driftless/agent.log | tail -10
Troubleshooting
Agent Won’t Start
- Check configuration syntax:
driftless --config /etc/driftless agent --validate-config
- Check file permissions:
ls -la /etc/driftless/
sudo chown -R driftless:driftless /etc/driftless/
- Check systemd logs:
sudo journalctl -u driftless-agent -n 50 --no-pager
- Test manual execution:
sudo -u driftless driftless --config /etc/driftless agent --dry-run
Tasks Not Executing
- Check agent status:
curl http://localhost:8000/health
- Verify configuration:
cat /etc/driftless/apply.yml
- Check task execution logs:
grep "apply.*task" /var/log/driftless/agent.log
- Test task manually:
driftless --config /etc/driftless apply --dry-run
High Resource Usage
- Check current metrics:
curl http://localhost:8000/metrics | grep -E "(memory|cpu)"
- Adjust resource limits:
# In agent.yml
max_memory_mb: 256
max_cpu_percent: 25
- Reduce collection intervals:
# In agent.yml
apply_interval: 600 # 10 minutes
facts_interval: 300 # 5 minutes
Circuit Breaker Tripped
- Check circuit breaker status:
curl http://localhost:8000/metrics | grep circuit_breaker
- Review recent failures:
grep "circuit.*open" /var/log/driftless/agent.log
- Investigate root cause:
  - Check network connectivity
  - Verify external service availability
  - Review task configurations
- Manual reset (if needed):
sudo systemctl restart driftless-agent
Configuration Not Reloading
- Check file permissions:
ls -la /etc/driftless/
- Verify file watcher:
grep "config.*reload" /var/log/driftless/agent.log
- Manual reload:
sudo systemctl reload driftless-agent
# or
sudo systemctl restart driftless-agent
Performance Tuning
Memory Optimization
# agent.yml
max_memory_mb: 256
circuit_breaker_threshold: 3
CPU Optimization
# agent.yml
max_cpu_percent: 25
apply_interval: 600
facts_interval: 300
Network Optimization
# agent.yml
# Reduce metrics collection frequency
metrics_interval: 60
# Configure timeouts
http_timeout: 30
Log Management
Log Rotation
Create /etc/logrotate.d/driftless:
/var/log/driftless/*.log {
daily
rotate 7
compress
delaycompress
missingok
notifempty
create 644 driftless driftless
postrotate
systemctl reload driftless-agent
endscript
}
Log Levels
Adjust log verbosity:
# agent.yml
log_level: warn # error, warn, info, debug, trace
Or via environment:
export RUST_LOG=driftless=debug
sudo systemctl restart driftless-agent
Backup and Recovery
Configuration Backup
#!/bin/bash
# Daily backup script
BACKUP_DIR="/var/backups/driftless"
mkdir -p "$BACKUP_DIR"
tar -czf "$BACKUP_DIR/config-$(date +%Y%m%d).tar.gz" -C /etc driftless
find "$BACKUP_DIR" -name "config-*.tar.gz" -mtime +30 -delete
Full Recovery
# Stop agent
sudo systemctl stop driftless-agent
# Restore configuration
sudo tar -xzf /var/backups/driftless/config-20231201.tar.gz -C /etc
# Restore logs (if needed)
# sudo tar -xzf /var/backups/driftless/logs-20231201.tar.gz -C /var/log
# Start agent
sudo systemctl start driftless-agent
Security Maintenance
Regular Updates
# Check for updates
curl -s https://api.github.com/repos/driftless-hq/driftless/releases/latest | grep "browser_download_url.*linux"
# Update binary
sudo systemctl stop driftless-agent
sudo cp new-driftless-binary /usr/local/bin/driftless
sudo systemctl start driftless-agent
Security Audits
# Check running processes
ps aux | grep driftless
# Verify file permissions
find /etc/driftless -type f -exec ls -la {} \;
# Check network connections
ss -tlnp | grep :8000
Emergency Procedures
Emergency Stop
# Immediate stop
sudo systemctl stop driftless-agent
# Kill all processes
sudo pkill -9 driftless
# Disable service
sudo systemctl disable driftless-agent
Emergency Recovery
# Restore from backup
sudo tar -xzf /var/backups/driftless/emergency-backup.tar.gz -C /
# Verify configuration
driftless --config /etc/driftless agent --validate-config
# Start in dry-run mode first
driftless --config /etc/driftless agent --dry-run
# Enable and start service
sudo systemctl enable driftless-agent
sudo systemctl start driftless-agent
Support and Escalation
- Check documentation: This operations guide and README.md
- Review logs: Complete log analysis as described above
- Community support: GitHub issues and discussions
- Commercial support: Contact your support provider
For critical issues, gather:
- Agent version: driftless --version
- Configuration files (sanitized)
- Recent logs: journalctl -u driftless-agent -n 100
- System information: uname -a, free -h, df -h
Driftless Agent Configuration Examples
This directory contains example configurations for common Driftless agent use cases. Each example includes all necessary configuration files (agent.yml, apply.yml, facts.yml, logs.yml) for a complete setup.
Available Examples
Basic Monitoring Agent
Files: agent-basic-monitoring.yml, facts-basic-monitoring.yml
Purpose: Minimal agent setup for collecting essential system metrics
Use Case: Simple infrastructure monitoring without complex logging or configuration enforcement
Features:
- CPU, memory, disk, and network monitoring
- Prometheus metrics endpoint
- 1-minute collection intervals
agent-basic-monitoring.yml
# Basic Infrastructure Monitoring Agent Configuration
# This example shows a minimal agent setup for monitoring system metrics
# Agent Configuration
config_dir: "~/.config/driftless/config"
apply_interval: 3600 # Check configuration every hour
facts_interval: 60 # Collect metrics every minute
apply_dry_run: false
metrics_port: 8000
enabled: true
facts-basic-monitoring.yml
# Facts Configuration for Basic Monitoring
# Collects essential system metrics and exposes them via Prometheus
collectors:
# CPU usage and load averages
- type: cpu
interval: 60
enabled: true
# Memory usage statistics
- type: memory
interval: 60
enabled: true
# Disk usage for key mount points
- type: disk
interval: 300 # 5 minutes
paths: ["/", "/var", "/tmp"]
enabled: true
# Network interface statistics
- type: network
interval: 60
interfaces: ["eth0", "wlan0"] # Adjust based on your system
enabled: true
# System information (hostname, OS, etc.)
- type: system
interval: 3600 # 1 hour
enabled: true
# Export collected metrics
exporters:
- type: prometheus
port: 8000
path: "/metrics"
enabled: true
Log Aggregation Agent
Files: agent-log-aggregation.yml, logs-comprehensive.yml
Purpose: Comprehensive log collection from multiple sources with forwarding to various destinations
Use Case: Centralized logging infrastructure
Features:
- Multi-source log collection (nginx, system, application, Docker)
- Multiple output destinations (S3, ELK stack, syslog, local files)
- Compression and batching for efficiency
agent-log-aggregation.yml
# Comprehensive Log Aggregation Agent Configuration
# This example shows how to collect logs from multiple sources and forward them to various destinations
# Agent Configuration
config_dir: "~/.config/driftless/config"
apply_interval: 3600 # 1 hour (reduced since we're focusing on logs)
facts_interval: 300 # 5 minutes (reduced frequency)
apply_dry_run: false
metrics_port: 8001 # Different port to avoid conflicts
enabled: true
logs-comprehensive.yml
# Comprehensive Log Aggregation Configuration
# Collects logs from multiple sources and forwards to various destinations
sources:
# Web server access logs
- type: file
name: nginx-access
paths:
- "/var/log/nginx/access.log"
- "/var/log/nginx/access.log.1"
parser: common # Apache Common Log Format
multiline:
pattern: '^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}' # IP address at start
negate: false
enabled: true
# Web server error logs
- type: file
name: nginx-error
paths: ["/var/log/nginx/error.log"]
parser: nginx_error
enabled: true
# System authentication logs
- type: file
name: system-auth
paths: ["/var/log/auth.log", "/var/log/secure"]
parser: syslog
enabled: true
# System messages
- type: file
name: system-messages
paths: ["/var/log/messages", "/var/log/syslog"]
parser: syslog
enabled: true
# Application logs (JSON format)
- type: file
name: application-json
paths: ["/var/log/application/*.log"]
parser: json
multiline:
pattern: '^{\s*"timestamp"' # JSON objects starting with timestamp
negate: false
enabled: true
# Docker container logs
- type: file
name: docker-logs
paths: ["/var/lib/docker/containers/*/*.log"]
parser: json
enabled: true
outputs:
# Long-term S3 storage with compression
- type: s3
name: s3-long-term-storage
bucket: my-company-logs
region: us-east-1
prefix: logs/{{ year }}/{{ month }}/{{ day }}/
compression:
algorithm: gzip
level: 6
batch:
max_size: 1000 # 1000 log entries per batch
max_age: 300 # 5 minutes max batch age
max_bytes: 5242880 # 5MB max batch size
enabled: true
# Real-time ELK stack forwarding
- type: http
name: elasticsearch-bulk
url: http://elasticsearch:9200/_bulk
method: POST
headers:
Content-Type: "application/x-ndjson"
auth:
type: basic
username: elastic
password: "{{ elasticsearch_password }}"
batch:
max_size: 100
max_age: 30
timeout: 30
enabled: true
# Local syslog for immediate visibility
- type: syslog
name: local-syslog
facility: local0
severity: info
tag: driftless
enabled: true
# Local file archive for backup
- type: file
name: local-archive
path: "/var/log/driftless/archive"
rotation:
max_size: "100MB"
max_age: "30d"
max_files: 30
compress: true
enabled: true
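The multiline patterns above are ordinary regular expressions. As a quick sanity check, independent of driftless' own multiline handling (how negate and continuation lines are combined is defined by the log source implementation), grep shows which sample lines the nginx-access pattern would match as the start of a new entry (with `\d` rewritten as `[0-9]` for grep -E):

```shell
# Which lines match the nginx-access multiline pattern (IPv4 at line start)?
printf '%s\n' \
  '192.168.1.5 - - [10/Oct/2024:12:00:00 +0000] "GET / HTTP/1.1" 200 612' \
  '    ...continuation line without a leading IP...' \
  '10.0.0.7 - - [10/Oct/2024:12:00:01 +0000] "POST /api HTTP/1.1" 201 45' \
  | grep -E '^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
# -> prints the two request lines; the continuation line does not match
```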
Configuration Enforcement Agent
Files: agent-config-enforcement.yml, apply-config-enforcement.yml
Purpose: Continuous enforcement of system configuration and security policies
Use Case: Compliance and security hardening
Features:
- Security package installation
- Firewall configuration
- SSH hardening
- System security settings
- Automated security monitoring
agent-config-enforcement.yml
# Configuration Enforcement Agent Configuration
# This example focuses on continuous configuration enforcement and compliance
# Agent Configuration
config_dir: "~/.config/driftless/config"
apply_interval: 600 # Check configuration every 10 minutes
facts_interval: 300 # 5 minutes (reduced since focus is on config)
apply_dry_run: false # Actually enforce configuration
metrics_port: 8002
enabled: true
apply-config-enforcement.yml
# Configuration Enforcement Tasks
# Ensures system configuration remains compliant with policy
tasks:
# Security packages
- type: package
name: fail2ban
state: present
- type: package
name: ufw
state: present
- type: package
name: auditd
state: present
# Security services
- type: service
name: fail2ban
state: started
enabled: true
- type: service
name: ufw
state: started
enabled: true
- type: service
name: auditd
state: started
enabled: true
# Firewall configuration
- type: ufw
rule: allow
port: "22"
proto: tcp
from_ip: "10.0.0.0/8"
- type: ufw
rule: allow
port: "80"
proto: tcp
- type: ufw
rule: allow
port: "443"
proto: tcp
- type: ufw
rule: default
policy: deny
direction: incoming
# SSH hardening
- type: file
path: "/etc/ssh/sshd_config"
state: present
content: |
# SSH configuration enforced by driftless
PermitRootLogin no
PasswordAuthentication no
X11Forwarding no
MaxAuthTries 3
ClientAliveInterval 60
ClientAliveCountMax 3
mode: "0644"
backup: true
- type: service
name: sshd
state: restarted
# System hardening
- type: sysctl
name: "net.ipv4.tcp_syncookies"
value: "1"
state: present
- type: sysctl
name: "net.ipv4.conf.all.rp_filter"
value: "1"
state: present
- type: sysctl
name: "kernel.randomize_va_space"
value: "2"
state: present
# Log rotation for security logs
- type: logrotate
name: auth-log
path: "/var/log/auth.log"
options:
- "weekly"
- "rotate 12"
- "compress"
- "missingok"
- "notifempty"
# Cron job for log analysis
- type: cron
name: security-log-check
minute: "0"
hour: "*/4"
job: "/usr/local/bin/security-check.sh"
state: present
user: root
# Ensure security monitoring scripts exist
- type: file
path: "/usr/local/bin/security-check.sh"
state: present
content: |
#!/bin/bash
# Security log analysis script
echo "Running security checks at $(date)"
# Check for suspicious login attempts
if grep -q "Failed password" /var/log/auth.log; then
echo "ALERT: Failed login attempts detected"
# Send alert here
fi
mode: "0755"
owner: root
group: root
Production-Ready Agent
Files: agent-production.yml, facts-production.yml, logs-production.yml, apply-production.yml
Purpose: Complete production deployment combining monitoring, logging, and configuration enforcement
Use Case: Enterprise production environments
Features:
- All monitoring capabilities
- Enterprise logging with redundancy
- Comprehensive configuration enforcement
- Security hardening
- Backup and disaster recovery
- TLS encryption for log forwarding
agent-production.yml
# Production-Ready Agent Configuration
# Complete example combining monitoring, logging, and configuration enforcement
# Agent Configuration
config_dir: "/etc/driftless/config"
apply_interval: 1800 # 30 minutes
facts_interval: 60 # 1 minute
apply_dry_run: false
metrics_port: 9090 # Standard Prometheus port
enabled: true
# Environment variables for secrets
secrets:
aws_access_key_id: "{{ AWS_ACCESS_KEY_ID }}"
aws_secret_access_key: "{{ AWS_SECRET_ACCESS_KEY }}"
elasticsearch_password: "{{ ELASTICSEARCH_PASSWORD }}"
facts-production.yml
# Production Facts Configuration
# Comprehensive monitoring setup for production environments
collectors:
# CPU monitoring
- type: cpu
interval: 30
enabled: true
# Memory monitoring
- type: memory
interval: 30
enabled: true
# Disk monitoring for all mount points
- type: disk
interval: 300
paths: ["/", "/var", "/tmp", "/opt", "/home"]
enabled: true
# Network monitoring
- type: network
interval: 30
interfaces: ["eth0", "bond0"] # Production interfaces
enabled: true
# Process monitoring
- type: process
interval: 60
processes: ["nginx", "postgres", "redis", "application"]
enabled: true
# System information
- type: system
interval: 3600
enabled: true
# Multiple exporters for redundancy
exporters:
# Primary Prometheus endpoint
- type: prometheus
port: 9090
path: "/metrics"
enabled: true
# Backup S3 storage
- type: s3
bucket: "{{ metrics_bucket }}"
region: "{{ aws_region }}"
prefix: metrics/{{ hostname }}/{{ year }}/{{ month }}/
interval: 300
enabled: true
# Local file backup
- type: file
path: "/var/log/driftless/metrics"
rotation:
max_size: "50MB"
max_age: "7d"
max_files: 7
enabled: true
logs-production.yml
# Production Logs Configuration
# Enterprise-grade log collection and forwarding
sources:
# Application logs
- type: file
name: application
paths: ["/var/log/application/*.log"]
parser: json
multiline:
pattern: '^{\s*"timestamp"'
negate: false
enabled: true
# Web server logs
- type: file
name: nginx
paths:
- "/var/log/nginx/access.log"
- "/var/log/nginx/error.log"
parser: nginx_combined
enabled: true
# Database logs
- type: file
name: postgres
paths: ["/var/log/postgresql/*.log"]
parser: syslog
enabled: true
# System logs
- type: file
name: system
paths:
- "/var/log/syslog"
- "/var/log/auth.log"
- "/var/log/kern.log"
parser: syslog
enabled: true
# Security logs
- type: file
name: audit
paths: ["/var/log/audit/audit.log"]
parser: audit
enabled: true
outputs:
# Primary ELK stack
- type: http
name: elasticsearch
url: https://elasticsearch.prod.company.com:9200/_bulk
method: POST
headers:
Content-Type: "application/x-ndjson"
auth:
type: basic
username: "{{ elasticsearch_user }}"
password: "{{ elasticsearch_password }}"
tls:
ca_cert: "/etc/ssl/certs/ca.pem"
client_cert: "/etc/ssl/certs/client.pem"
client_key: "/etc/ssl/private/client.key"
batch:
max_size: 500
max_age: 60
max_bytes: 10485760 # 10MB
retry:
max_attempts: 3
backoff: exponential
enabled: true
# Backup S3 storage
- type: s3
name: s3-backup
bucket: "{{ logs_bucket }}"
region: "{{ aws_region }}"
prefix: logs/{{ hostname }}/{{ year }}/{{ month }}/{{ day }}/
compression:
algorithm: gzip
level: 9
batch:
max_size: 2000
max_age: 600
max_bytes: 67108864 # 64MB
enabled: true
# Local syslog relay
- type: syslog
name: local-relay
facility: local1
severity: info
host: "logrelay.prod.company.com"
port: 514
protocol: tcp
enabled: true
# Emergency local storage
- type: file
name: emergency-buffer
path: "/var/log/driftless/emergency"
rotation:
max_size: "1GB"
max_age: "1d"
max_files: 7
compress: true
enabled: true
apply-production.yml
# Production Configuration Enforcement
# Critical system configuration that must be maintained
tasks:
# Core system packages
- type: package
name: monitoring-tools
state: present
- type: package
name: logrotate
state: present
- type: package
name: unattended-upgrades
state: present
# Monitoring services
- type: service
name: monitoring-agent
state: started
enabled: true
- type: service
name: logrotate
state: started
enabled: true
# Security hardening
- type: sysctl
name: "net.ipv4.tcp_syncookies"
value: "1"
- type: sysctl
name: "net.ipv4.conf.all.rp_filter"
value: "1"
- type: sysctl
name: "kernel.randomize_va_space"
value: "2"
- type: sysctl
name: "net.ipv4.ip_forward"
value: "0"
# SSH hardening
- type: file
path: "/etc/ssh/sshd_config.d/driftless.conf"
state: present
content: |
# Production SSH hardening enforced by driftless
PermitRootLogin no
PasswordAuthentication no
MaxAuthTries 3
ClientAliveInterval 60
ClientAliveCountMax 3
AllowTcpForwarding no
X11Forwarding no
PermitTTY yes
PrintLastLog yes
mode: "0644"
- type: service
name: ssh
state: restarted
# Log rotation policies
- type: logrotate
name: application-logs
path: "/var/log/application/*.log"
options:
- "daily"
- "rotate 30"
- "compress"
- "missingok"
- "notifempty"
- "create 0644 {{ application_user }} {{ application_group }}"
# Backup configuration
- type: cron
name: daily-backup
minute: "0"
hour: "2"
job: "/usr/local/bin/backup.sh"
state: present
user: backup
# Monitoring configuration
- type: file
path: "/etc/monitoring/agent.yml"
state: present
content: |
# Monitoring agent configuration
server: monitoring.prod.company.com
port: 443
tls: true
api_key: "{{ monitoring_api_key }}"
mode: "0600"
owner: monitoring
group: monitoring
# NTP synchronization
- type: service
name: systemd-timesyncd
state: started
enabled: true
- type: file
path: "/etc/systemd/timesyncd.conf"
state: present
content: |
[Time]
NTP=ntp1.prod.company.com ntp2.prod.company.com
FallbackNTP=pool.ntp.org
mode: "0644"
- type: service
name: systemd-timesyncd
state: restarted
# Disk monitoring
- type: cron
name: disk-usage-alert
minute: "*/15"
job: "/usr/local/bin/check-disk-usage.sh"
state: present
user: root
# Security updates
- type: file
path: "/etc/apt/apt.conf.d/50unattended-upgrades"
state: present
content: |
Unattended-Upgrade::Allowed-Origins {
"${distro_id}:${distro_codename}-security";
};
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "02:00";
mode: "0644"
Configuration File Structure
Each example follows the standard Driftless configuration structure:
~/.config/driftless/
├── config/
│ ├── agent.yml # Agent behavior configuration
│ ├── apply.yml # Configuration operations to enforce
│ ├── facts.yml # Metrics collection configuration
│ └── logs.yml # Log collection and forwarding configuration
└── data/ # Runtime data (created automatically)
Getting Started
- Choose an example that matches your use case
- Copy the configuration files to ~/.config/driftless/config/
- Edit the configurations to match your environment
- Set any required environment variables or secrets
- Start the agent:
driftless agent
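A sketch of those steps in shell, using the basic-monitoring example. The heredoc here stands in for copying the real example files, and a throwaway HOME keeps the snippet safe to run anywhere; in real use, copy the example files from this directory instead:

```shell
# Stage a minimal agent configuration in a throwaway HOME
HOME="$(mktemp -d)"
CONFIG_DIR="$HOME/.config/driftless/config"
mkdir -p "$CONFIG_DIR"
cat > "$CONFIG_DIR/agent.yml" <<'EOF'
config_dir: "~/.config/driftless/config"
apply_interval: 3600
facts_interval: 60
metrics_port: 8000
enabled: true
EOF
ls "$CONFIG_DIR"          # -> agent.yml
# driftless agent         # then start the agent
```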
Environment Variables
Many examples use template variables that should be set as environment variables:
export AWS_ACCESS_KEY_ID="your-key"
export AWS_SECRET_ACCESS_KEY="your-secret"
export ELASTICSEARCH_PASSWORD="your-password"
export MONITORING_API_KEY="your-api-key"
Customization
These examples are starting points. Customize them for your specific needs:
- Adjust collection intervals based on your monitoring requirements
- Modify file paths to match your system layout
- Configure appropriate authentication for external services
- Add additional tasks, collectors, or log sources as needed
Security Considerations
- Store sensitive configuration in environment variables, not in config files
- Use TLS encryption for log forwarding in production
- Implement proper access controls for metrics endpoints
- Regularly rotate credentials and API keys
- Monitor agent resource usage and adjust limits as needed
Troubleshooting
If the agent fails to start:
- Validate configuration syntax: driftless agent --validate-config
- Check file permissions on configuration files
- Verify network connectivity to external services
- Review agent logs with RUST_LOG=debug driftless agent
For issues with specific components:
- Apply: Check task definitions and system permissions
- Facts: Verify collector configurations and system access
- Logs: Check file paths, permissions, and output destinations
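Since configurations may also be written in JSON (or TOML), a JSON variant can be pre-checked with stock Python when debugging a parse failure. This is only a rough syntax check, not a validation of driftless' schema; the file below is a hypothetical JSON form of a minimal apply configuration:

```shell
# Write a hypothetical JSON apply configuration
cat > /tmp/apply.json <<'EOF'
{
  "tasks": [
    { "type": "package", "name": "nginx", "state": "present" }
  ]
}
EOF
# json.tool exits non-zero on malformed JSON
python3 -m json.tool /tmp/apply.json > /dev/null && echo "apply.json parses"
# -> apply.json parses
```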
Templates
Driftless provides a powerful templating system based on Jinja2, allowing you to create dynamic configuration files with variables, filters, and built-in functions.
Enhanced Template System Examples
The examples below demonstrate the template system's variables, filters, and built-in functions in practice.
Basic Variable Substitution
vars:
user_name: "alice"
user_count: 42
config_path: "/etc/myapp/config.yml"
tasks:
- type: debug
msg: "Hello {{ user_name }}! There are {{ user_count }} users."
- type: file
path: "{{ config_path | dirname }}/backup"
state: present
Filters
vars:
app_name: "my-application"
file_path: "/home/user/data.txt"
description: "hello world example"
tasks:
- type: debug
msg: "App name in uppercase: {{ app_name | upper }}"
- type: debug
msg: "Filename: {{ file_path | basename }}"
- type: debug
msg: "Directory: {{ file_path | dirname }}"
- type: debug
msg: "Name length: {{ app_name | length }}"
- type: debug
msg: "Capitalized: {{ description | capitalize }}"
- type: debug
msg: "Truncated: {{ description | truncate(12) }}"
- type: debug
msg: "Truncated with custom end: {{ description | truncate(12, false, '...') }}"
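Judging by the examples above, the basename and dirname filters behave like the POSIX utilities of the same name, which makes their output easy to check from a shell:

```shell
basename /home/user/data.txt   # -> data.txt
dirname  /home/user/data.txt   # -> /home/user
```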
Built-in Functions
vars:
server_list: ["web1", "web2", "db1"]
config_file: "/etc/nginx/sites-available/default"
tasks:
- type: debug
msg: "Server count: {{ length(server_list) }}"
- type: debug
msg: "Config filename: {{ basename(config_file) }}"
- type: debug
msg: "Config directory: {{ dirname(config_file) }}"
Complex Condition Expressions
vars:
deploy_env: "production"
server_count: 3
enable_ssl: true
regions: ["us-east", "us-west", "eu-central"]
tasks:
- type: debug
msg: "Production deployment with SSL"
when: "{{ deploy_env }} == production and {{ enable_ssl }}"
- type: debug
msg: "Multi-region setup"
when: "{{ length(regions) }} > 1"
- type: debug
msg: "Large cluster"
when: "{{ server_count }} >= 5"
- type: debug
msg: "US region included"
when: "us-east in {{ regions }}"
- type: fail
msg: "Cannot deploy to production without SSL"
when: "{{ deploy_env }} == production and not {{ enable_ssl }}"
Variable Definition Checks
tasks:
- type: assert
that: "deploy_env is defined"
success_msg: "Deployment environment is configured"
- type: fail
msg: "Required variable 'api_key' is not set"
when: "api_key is not defined"
- type: set_fact
key: "cluster_size"
value: "{{ server_count | int }}"
- type: debug
msg: "Using {{ cluster_size }} servers"
when: "cluster_size is defined"
Registered Variables Usage
Registered variables allow you to capture the output of one task and use it in subsequent tasks. This is particularly useful for conditional execution or dynamic configuration based on command results or API responses.
Command Output Capture
Capture stdout and use it in a template.
tasks:
- type: command
description: "Get uptime"
command: uptime -p
register: system_uptime
- type: debug
msg: "The system has been up for: {{ system_uptime.stdout }}"
Conditional Execution based on Command Result
Use the exit code (rc) to decide whether to run another task.
tasks:
- type: command
description: "Check if a configuration file is valid"
command: myapp --check-config /etc/myapp.conf
register: config_check
ignore_errors: true
- type: command
description: "Apply configuration if valid"
command: myapp --apply-config /etc/myapp.conf
when: "{{ config_check.rc }} == 0"
- type: fail
msg: "Configuration check failed with error: {{ config_check.stderr }}"
when: "{{ config_check.rc }} != 0"
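The rc field is simply the command's exit status. The same branch written in plain shell, with false standing in for a failing check command, looks like:

```shell
# `false` stands in for a failing `myapp --check-config ...`
false
rc=$?
if [ "$rc" -eq 0 ]; then
  echo "apply configuration"
else
  echo "check failed with rc=$rc"   # -> check failed with rc=1
fi
```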
API Result Usage
Capture a response from a web service and use its status or content.
tasks:
- type: uri
description: "Check service health"
url: https://api.service.local/health
return_content: true
register: api_health
- type: debug
msg: "Service is healthy. Response status: {{ api_health.status }}"
when: "{{ api_health.status }} == 200"
- type: debug
msg: "Service body: {{ api_health.content }}"
when: "api_health.content is defined"
Template Inheritance and Composition
vars:
app_name: "webapp"
base_path: "/opt/{{ app_name }}"
version: "1.2.3"
tasks:
# Use include_tasks for modular task composition
- type: include_tasks
file: "tasks/setup-directories.yml"
vars:
app_base: "{{ base_path }}"
app_version: "{{ version }}"
- type: include_tasks
file: "tasks/deploy-{{ deploy_env }}.yml"
when: "{{ deploy_env }} is defined"
Advanced Expressions
vars:
ports: [80, 443, 8080]
server_names: ["web", "api", "admin"]
memory_gb: 16
tasks:
- type: debug
msg: "Server has {{ memory_gb }} GB RAM"
when: "{{ memory_gb | int }} >= 8"
- type: debug
msg: "High availability setup"
when: "{{ length(server_names) }} > 1 and {{ memory_gb }} >= 32"
- type: set_fact
key: "is_production"
value: "{{ deploy_env == 'production' }}"
- type: assert
that: "{{ is_production }} or {{ deploy_env }} == 'staging'"
success_msg: "Valid deployment environment"
Environment Variables and Env Files
Environment variables are accessible through the env fact and env files are automatically loaded.
Direct Access via env
tasks:
- type: debug
msg: "User: {{ env.USER }}"
- type: debug
msg: "Home directory: {{ env.HOME }}"
- type: debug
msg: "Path: {{ env.PATH }}"
Env File Support
Create ~/.config/driftless/env (user) or /etc/driftless/env (system-wide) with:
API_KEY=your-secret-key
DATABASE_PASSWORD=secret-password
APP_ENV=production
Then access in templates:
vars:
app_env: "{{ env.APP_ENV }}"
api_key: "{{ env.API_KEY }}"
tasks:
- type: debug
msg: "Running in {{ app_env }} environment"
- type: file
path: "/etc/myapp/config.yml"
content: |
api_key: "{{ api_key }}"
database:
password: "{{ env.DATABASE_PASSWORD }}"
- type: fail
msg: "API_KEY environment variable not set"
when: "api_key == '' or api_key is not defined"
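An env file is plain KEY=value lines. For local experiments the same file can be exported into a shell session (driftless itself loads ~/.config/driftless/env and /etc/driftless/env automatically; the path and values below are placeholders):

```shell
# Write a throwaway env file with placeholder values
cat > /tmp/driftless-demo.env <<'EOF'
APP_ENV=production
API_KEY=your-secret-key
EOF
# Export every assignment into the current shell session
set -a; . /tmp/driftless-demo.env; set +a
echo "$APP_ENV"   # -> production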
Notes
- Environment variables are loaded from the system, /etc/driftless/env (system-wide), and ~/.config/driftless/env (user)
- Access via env.VARIABLE_NAME syntax
- Variables defined in the YAML vars: section are processed at load time
- Env file variables override system environment variables
Built-in Facts
The system provides built-in facts:
tasks:
- type: debug
msg: "Running Driftless version {{ driftless_version }}"
- type: debug
msg: "OS Family: {{ os_family }}"
- type: debug
msg: "Architecture: {{ driftless_architecture }}"
- type: debug
msg: "Distribution: {{ distribution }}"
Complete Example
---
vars:
app_name: "my-web-app"
deploy_env: "production"
server_count: 3
enable_ssl: true
base_domain: "example.com"
config_dir: "/etc/{{ app_name }}"
tasks:
# Validation
- type: assert
that: "{{ deploy_env }} in ['development', 'staging', 'production']"
success_msg: "Valid deployment environment: {{ deploy_env }}"
- type: assert
that: "{{ server_count | int }} > 0"
success_msg: "Server count is valid: {{ server_count }}"
# Setup
- type: set_fact
key: "full_domain"
value: "{{ app_name }}.{{ base_domain }}"
- type: set_fact
key: "is_https"
value: "{{ enable_ssl and deploy_env == 'production' }}"
# Directory creation with templating
- type: directory
path: "{{ config_dir }}"
state: present
mode: "0755"
- type: directory
path: "{{ config_dir }}/ssl"
state: present
mode: "0700"
when: "{{ is_https }}"
# Configuration file
- type: file
path: "{{ config_dir }}/app.yml"
state: present
content: |
app:
name: {{ app_name | upper }}
environment: {{ deploy_env }}
servers: {{ server_count }}
domain: {{ full_domain }}
ssl_enabled: {{ is_https }}
config_path: {{ config_dir }}
# Deployment
- type: debug
msg: "Deploying {{ app_name }} to {{ deploy_env }} environment"
- type: debug
msg: "Using {{ server_count }} servers for high availability"
when: "{{ server_count | int }} > 1"
- type: debug
msg: "SSL will be configured for {{ full_domain }}"
when: "{{ is_https }}"
# Include environment-specific tasks
- type: include_tasks
file: "tasks/deploy-{{ deploy_env }}.yml"
vars:
app_config: "{{ config_dir }}/app.yml"
ssl_enabled: "{{ is_https }}"
Using Jinja2 Template Files with Task Chaining
Driftless supports rendering external Jinja2 template files (.j2 extension) using the template task. This allows for complex templating with access to all variables, including outputs from previous tasks, demonstrating the system’s ability to chain tasks together.
Example: Dynamic Configuration Based on System Facts
First, create a template file nginx.conf.j2 in your configuration directory:
# nginx.conf.j2
server {
listen {{ nginx_port }};
server_name {{ server_name }};
root {{ web_root }};
index index.html;
# Dynamic upstream based on registered command output
upstream app_backend {
{% for server in app_servers %}
server {{ server }};
{% endfor %}
}
location / {
proxy_pass http://app_backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
# SSL configuration if enabled
{% if enable_ssl %}
listen 443 ssl;
ssl_certificate {{ ssl_cert }};
ssl_certificate_key {{ ssl_key }};
{% endif %}
# Custom error page with system info
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root {{ web_root }};
}
# System uptime from previous command
add_header X-System-Uptime "{{ system_uptime.stdout | trim }}" always;
}
Then, use the following task configuration to render it:
vars:
nginx_port: 80
server_name: "myapp.example.com"
web_root: "/var/www/html"
enable_ssl: false
ssl_cert: "/etc/ssl/certs/myapp.crt"
ssl_key: "/etc/ssl/private/myapp.key"
tasks:
# First task: Gather system information
- type: command
description: "Get system uptime"
command: "uptime -p"
register: system_uptime
# Second task: Get list of application servers (simulated)
- type: command
description: "Get list of backend servers"
command: "echo -e '192.168.1.10:8080\n192.168.1.11:8080'"
register: backend_servers
# Third task: Process the server list into a variable
- type: set_fact
key: "app_servers"
value: "{{ backend_servers.stdout_lines }}"
# Fourth task: Render the template using outputs from previous tasks
- type: template
description: "Render nginx configuration with dynamic backend"
src: "nginx.conf.j2"
dest: "/etc/nginx/sites-available/myapp"
state: present
vars:
nginx_port: "{{ nginx_port }}"
server_name: "{{ server_name }}"
web_root: "{{ web_root }}"
enable_ssl: "{{ enable_ssl }}"
ssl_cert: "{{ ssl_cert }}"
ssl_key: "{{ ssl_key }}"
# app_servers and system_uptime are automatically available
# Fifth task: Enable the site
- type: file
path: "/etc/nginx/sites-enabled/myapp"
src: "/etc/nginx/sites-available/myapp"
state: link
# Sixth task: Reload nginx
- type: service
name: nginx
state: reloaded
This example demonstrates:
- Task Chaining: The output of the command task (registered as system_uptime) is used in the template.
- Variable Processing: The set_fact task processes the command output into a list (app_servers) used in the template loop.
- Complex Templating: The .j2 file includes conditionals ({% if %}), loops ({% for %}), and variable access.
- Full Integration: Templates have access to all variables, including those set by previous tasks.
The rendered output would include the actual system uptime in the HTTP header and dynamically configure the upstream servers based on the command output.
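What the {% for %} loop in the template emits can be mimicked in plain shell, feeding it the same two sample addresses the command task echoes:

```shell
# One `server ...;` directive per line of the registered command's output
printf '192.168.1.10:8080\n192.168.1.11:8080\n' | while read -r s; do
  echo "    server $s;"
done
# ->     server 192.168.1.10:8080;
# ->     server 192.168.1.11:8080;
```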