Tests Directory

This directory contains test scripts and utilities for validating various systems and components in the Skelly project.

Overview

The tests/ directory is designed to house:

  • System validation scripts
  • Component testing utilities
  • Integration tests
  • Performance benchmarks
  • Debugging tools

Current Test Files

test_logging.gd

Comprehensive test script for the DebugManager logging system.

Features:

  • Tests all log levels (TRACE, DEBUG, INFO, WARN, ERROR, FATAL)
  • Validates log level filtering functionality
  • Tests category-based logging organization
  • Verifies debug mode integration
  • Demonstrates proper logging usage patterns

Usage:

# Option 1: Add as a temporary autoload (see the project.godot sketch below)
# In project.godot, add an autoload entry pointing at res://tests/test_logging.gd

# Option 2: Instantiate in a scene
var test_script = preload("res://tests/test_logging.gd").new()
add_child(test_script)

# Option 3: Run directly from editor
# Open the script and run the scene containing it
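
For Option 1, the autoload entry in project.godot looks roughly like the snippet below. The singleton name TestLogging is only a placeholder; the leading * marks the autoload as enabled.

[autoload]

TestLogging="*res://tests/test_logging.gd"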

Expected Output: The script will output formatted log messages demonstrating:

  • Proper timestamp formatting
  • Log level filtering behavior
  • Category organization
  • Debug mode dependency for TRACE/DEBUG levels

Adding New Tests

When creating new test files, follow these conventions:

File Naming

  • Use descriptive names starting with test_
  • Example: test_audio_manager.gd, test_scene_transitions.gd

File Structure

extends Node

# Brief description of what this test validates

func _ready():
    # Wait for system initialization if needed
    await get_tree().process_frame
    run_tests()

func run_tests():
    print("=== Starting [System Name] Tests ===")

    # Individual test functions
    test_basic_functionality()
    test_edge_cases()
    test_error_conditions()

    print("=== [System Name] Tests Complete ===")

func test_basic_functionality():
    print("\\n--- Test: Basic Functionality ---")
    # Test implementation

func test_edge_cases():
    print("\\n--- Test: Edge Cases ---")
    # Edge case testing

func test_error_conditions():
    print("\\n--- Test: Error Conditions ---")
    # Error condition testing

Testing Guidelines

  1. Independence: Each test should be self-contained and not depend on other tests
  2. Cleanup: Restore original state after testing (settings, debug modes, etc.); see the sketch after this list
  3. Clear Output: Use descriptive print statements to show test progress
  4. Error Handling: Test both success and failure conditions
  5. Documentation: Include comments explaining complex test scenarios
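
A minimal sketch of the cleanup guideline (2), assuming a hypothetical DebugManager.debug_mode flag stands in for whatever state your test modifies:

func test_with_cleanup():
    print("\n--- Test: Debug Mode Behavior ---")
    # Capture the original state before touching it (hypothetical flag).
    var original_debug_mode = DebugManager.debug_mode
    DebugManager.debug_mode = true
    # ... exercise the system under test here ...
    # Restore the original state so later tests start clean.
    DebugManager.debug_mode = original_debug_mode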

Integration with Main Project

  • Temporary Usage: Test files are meant to be added temporarily during development
  • Not in Production: These files should not be included in release builds
  • Autoload Testing: Add to autoloads temporarily for automatic execution
  • Manual Testing: Run individually when testing specific components

Test Categories

System Tests

Test core autoload managers and global systems:

  • test_logging.gd - DebugManager logging system
  • Future: test_settings.gd - SettingsManager functionality
  • Future: test_audio.gd - AudioManager functionality
  • Future: test_scene_management.gd - GameManager transitions

Component Tests

Test individual game components:

  • Future: test_match3.gd - Match-3 gameplay mechanics
  • Future: test_tile_system.gd - Tile behavior and interactions
  • Future: test_ui_components.gd - Menu and UI functionality

Integration Tests

Test system interactions and workflows:

  • Future: test_game_flow.gd - Complete game session flow
  • Future: test_debug_system.gd - Debug UI integration
  • Future: test_localization.gd - Language switching and translations

Running Tests

During Development

  1. Copy or symlink the test file to your scene
  2. Add as a child node or autoload temporarily (see the sketch after this list)
  3. Run the project and observe console output
  4. Remove from project when testing is complete
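
A rough sketch of steps 2 through 4 from a throwaway development scene; the one-second wait is arbitrary, just long enough for the test output to appear:

func _ready():
    var test = preload("res://tests/test_logging.gd").new()
    add_child(test)  # steps 2-3: the test's _ready() runs and prints to the console
    await get_tree().create_timer(1.0).timeout
    test.queue_free()  # step 4: remove it once the output has been captured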

Automated Testing

While Godot doesn't ship a built-in unit testing framework, these scripts provide:

  • Consistent validation approach
  • Repeatable test scenarios
  • Clear pass/fail output (illustrated in the sketch below)
  • System behavior documentation
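
One way to get that clear pass/fail output is a small helper like the one below; it is illustrative only, not an existing project utility:

var _failures := 0

func check(condition: bool, label: String) -> void:
    if condition:
        print("PASS: %s" % label)
    else:
        _failures += 1
        push_error("FAIL: %s" % label)

func run_tests():
    check(2 + 2 == 4, "basic arithmetic")
    print("=== Tests complete, %d failure(s) ===" % _failures)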

Best Practices

  1. Document Expected Behavior: Include comments about what should happen
  2. Test Boundary Conditions: Include edge cases and error conditions
  3. Measure Performance: Add timing for performance-critical components (see the timing sketch below)
  4. Visual Validation: For UI components, include visual checks
  5. Cleanup After Tests: Restore initial state to avoid side effects
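
For best practice 3, a timing sketch assuming Godot 4's Time singleton; the loop body and iteration count are placeholders:

func test_match_detection_performance():
    var start := Time.get_ticks_usec()
    for i in 1000:
        pass  # call the performance-critical code here
    var elapsed_ms := (Time.get_ticks_usec() - start) / 1000.0
    print("1000 iterations took %.2f ms" % elapsed_ms)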

Contributing

When adding new test files:

  1. Follow the naming and structure conventions
  2. Update this README with new test descriptions
  3. Ensure tests are self-contained and documented
  4. Test both success and failure scenarios
  5. Include performance considerations where relevant

This testing approach helps maintain code quality and provides validation tools for system changes and refactoring.