# Tests Directory
This directory contains test scripts and utilities for validating various systems and components in the Skelly project.
## Overview
The tests/ directory is designed to house:
- System validation scripts
- Component testing utilities
- Integration tests
- Performance benchmarks
- Debugging tools
## Current Test Files

### test_logging.gd

A comprehensive test script for the DebugManager logging system.
Features:
- Tests all log levels (TRACE, DEBUG, INFO, WARN, ERROR, FATAL)
- Validates log level filtering functionality
- Tests category-based logging organization
- Verifies debug mode integration
- Demonstrates proper logging usage patterns
Usage:

```gdscript
# Option 1: Add as a temporary autoload
# (register tests/test_logging.gd under [autoload] in project.godot)

# Option 2: Instantiate in a scene
var test_script = preload("res://tests/test_logging.gd").new()
add_child(test_script)

# Option 3: Run directly from the editor
# (open the scene containing the script and run it)
```
Expected Output: The script will output formatted log messages demonstrating:
- Proper timestamp formatting
- Log level filtering behavior
- Category organization
- Debug mode dependency for TRACE/DEBUG levels
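For orientation, the logging calls exercised by test_logging.gd presumably follow a pattern like the sketch below. The DebugManager method and enum names here are illustrative assumptions, not a documented API:

```gdscript
# Hypothetical sketch only: the real DebugManager API may differ.
func _demo_logging():
	DebugManager.set_log_level(DebugManager.LogLevel.INFO)
	DebugManager.info("Audio", "Bus layout loaded")         # visible at INFO
	DebugManager.warn("Save", "Missing save slot, using defaults")
	DebugManager.debug("UI", "Menu focus changed")          # filtered out at INFO
```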
## Adding New Tests
When creating new test files, follow these conventions:
### File Naming

- Use descriptive names starting with test_
- Examples: test_audio_manager.gd, test_scene_transitions.gd
### File Structure

```gdscript
extends Node
# Brief description of what this test validates

func _ready():
	# Wait for system initialization if needed
	await get_tree().process_frame
	run_tests()

func run_tests():
	print("=== Starting [System Name] Tests ===")
	# Individual test functions
	test_basic_functionality()
	test_edge_cases()
	test_error_conditions()
	print("=== [System Name] Tests Complete ===")

func test_basic_functionality():
	print("\n--- Test: Basic Functionality ---")
	# Test implementation

func test_edge_cases():
	print("\n--- Test: Edge Cases ---")
	# Edge case testing

func test_error_conditions():
	print("\n--- Test: Error Conditions ---")
	# Error condition testing
```
## Testing Guidelines
- Independence: Each test should be self-contained and not depend on other tests
- Cleanup: Restore original state after testing (settings, debug modes, etc.)
- Clear Output: Use descriptive print statements to show test progress
- Error Handling: Test both success and failure conditions
- Documentation: Include comments explaining complex test scenarios
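The Cleanup guideline boils down to a save-and-restore pattern: capture any state the test mutates, then put it back so later tests start from a known baseline. A minimal sketch (DebugManager.log_level is a hypothetical property used for illustration):

```gdscript
extends Node
# Sketch of the "Cleanup" guideline. The specific setting touched here
# (a DebugManager log level) is an illustrative assumption.

func test_log_level_filtering():
	var saved_level = DebugManager.log_level     # capture original state
	DebugManager.log_level = DebugManager.LogLevel.ERROR
	# ... exercise filtering and print results ...
	DebugManager.log_level = saved_level         # restore, even on success
```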
## Integration with Main Project
- Temporary Usage: Test files are meant to be added temporarily during development
- Not in Production: These files should not be included in release builds
- Autoload Testing: Add to autoloads temporarily for automatic execution
- Manual Testing: Run individually when testing specific components
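Registering a test as a temporary autoload is an edit to project.godot. A sketch of what that entry looks like (the singleton name TestLogging is an arbitrary choice; the leading `*` enables the autoload):

```ini
[autoload]

TestLogging="*res://tests/test_logging.gd"
```

Remember to remove the entry again once testing is done, per the guidelines above.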
## Test Categories
### System Tests
Test core autoload managers and global systems:
- test_logging.gd - DebugManager logging system
- Future: test_settings.gd - SettingsManager functionality
- Future: test_audio.gd - AudioManager functionality
- Future: test_scene_management.gd - GameManager transitions
### Component Tests
Test individual game components:
- Future: test_match3.gd - Match-3 gameplay mechanics
- Future: test_tile_system.gd - Tile behavior and interactions
- Future: test_ui_components.gd - Menu and UI functionality
### Integration Tests
Test system interactions and workflows:
- Future: test_game_flow.gd - Complete game session flow
- Future: test_debug_system.gd - Debug UI integration
- Future: test_localization.gd - Language switching and translations
## Running Tests
### During Development
1. Copy or symlink the test file into your project
2. Add it as a child node or as a temporary autoload
3. Run the project and observe the console output
4. Remove it from the project when testing is complete
### Automated Testing
While Godot doesn't have built-in unit testing, these scripts provide:
- Consistent validation approach
- Repeatable test scenarios
- Clear pass/fail output
- System behavior documentation
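Clear pass/fail output can be had without a framework by counting checks in the script itself. A lightweight sketch (the _check helper is hypothetical, not an existing project API):

```gdscript
extends Node
# Minimal pass/fail tracking for ad-hoc test scripts.
var passed := 0
var failed := 0

func _check(condition: bool, label: String) -> void:
	if condition:
		passed += 1
		print("PASS: %s" % label)
	else:
		failed += 1
		print("FAIL: %s" % label)

func run_tests():
	_check(2 + 2 == 4, "basic arithmetic sanity check")
	print("=== %d passed, %d failed ===" % [passed, failed])
```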
## Best Practices
- Document Expected Behavior: Include comments about what should happen
- Test Boundary Conditions: Include edge cases and error conditions
- Measure Performance: Add timing for performance-critical components
- Visual Validation: For UI components, include visual checks
- Cleanup After Tests: Restore initial state to avoid side effects
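The "Measure Performance" practice can be sketched with Godot 4's Time singleton; the function name and iteration count below are illustrative:

```gdscript
# Sketch of timing a performance-critical operation in a test script.
func test_tile_refresh_performance():
	var start := Time.get_ticks_usec()
	for i in 1000:
		pass  # call the operation under test here
	var elapsed_ms := (Time.get_ticks_usec() - start) / 1000.0
	print("1000 iterations took %.2f ms" % elapsed_ms)
```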
## Contributing
When adding new test files:
- Follow the naming and structure conventions
- Update this README with new test descriptions
- Ensure tests are self-contained and documented
- Test both success and failure scenarios
- Include performance considerations where relevant
This testing approach helps maintain code quality and provides validation tools for system changes and refactoring.