Tests Directory
Test scripts and utilities for validating Skelly project systems.
Related Documentation:
- Coding Standards: CODE_OF_CONDUCT.md - Follow coding standards when writing tests
- Architecture: ARCHITECTURE.md - Understand system design before testing
Overview
The tests/ directory contains:
- System validation scripts
- Component testing utilities
- Integration tests
- Performance benchmarks
- Debugging tools
📋 File Naming: All test files follow the naming conventions with PascalCase and a "Test" prefix (e.g., TestAudioManager.gd).
Current Test Files
TestLogging.gd
Test script for DebugManager logging system.
Features:
- Tests all log levels (TRACE, DEBUG, INFO, WARN, ERROR, FATAL)
- Validates log level filtering
- Tests category-based logging
- Verifies debug mode integration
- Demonstrates logging usage patterns
Usage:
```gdscript
# Option 1: Add as temporary autoload
# In project.godot, add: tests/TestLogging.gd

# Option 2: Instantiate in a scene
var test_script = preload("res://tests/TestLogging.gd").new()
add_child(test_script)

# Option 3: Run directly from editor
# Open the script and run the scene containing it
```
Expected Output: Formatted log messages showing:
- Timestamp formatting
- Log level filtering
- Category organization
- Debug mode dependency for TRACE/DEBUG levels
Adding New Tests
Follow these conventions for new test files:
File Naming
- Use descriptive names prefixed with Test (PascalCase) or test_ (snake_case)
- Examples: TestAudioManager.gd, test_scene_transitions.gd
File Structure
```gdscript
extends Node
# Brief description of what this test validates

func _ready():
	# Wait for system initialization if needed
	await get_tree().process_frame
	run_tests()

func run_tests():
	print("=== Starting [System Name] Tests ===")
	# Individual test functions
	test_basic_functionality()
	test_edge_cases()
	test_error_conditions()
	print("=== [System Name] Tests Complete ===")

func test_basic_functionality():
	print("\n--- Test: Basic Functionality ---")
	# Test implementation

func test_edge_cases():
	print("\n--- Test: Edge Cases ---")
	# Edge case testing

func test_error_conditions():
	print("\n--- Test: Error Conditions ---")
	# Error condition testing
```
Testing Guidelines
- Independence: Each test is self-contained
- Cleanup: Restore original state after testing
- Clear Output: Use descriptive print statements
- Error Handling: Test success and failure conditions
- Documentation: Comment complex test scenarios
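The guidelines above can be sketched as a minimal self-contained test. SettingsManager and its master_volume property are hypothetical names used for illustration, not the project's confirmed API:

```gdscript
extends Node
# Minimal sketch of a self-contained test: snapshot state, exercise the
# system, then restore the original state (Independence + Cleanup).
# SettingsManager and master_volume are hypothetical names.

func _ready():
	await get_tree().process_frame
	print("\n--- Test: Master Volume ---")
	var original = SettingsManager.master_volume  # snapshot before mutating
	SettingsManager.master_volume = 0.5
	print("set volume: %s" % ("PASS" if SettingsManager.master_volume == 0.5 else "FAIL"))
	SettingsManager.master_volume = original  # restore original state
```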
Integration with Main Project
- Temporary Usage: Add test files temporarily during development
- Not in Production: Exclude from release builds
- Autoload Testing: Add to autoloads temporarily for automatic execution
- Manual Testing: Run individually for specific components
Test Categories
System Tests
Test core autoload managers and global systems:
- TestLogging.gd - DebugManager logging system
- test_checksum_issue.gd - SaveManager checksum validation and deterministic hashing
- TestMigrationCompatibility.gd - SaveManager version migration and backward compatibility
- test_save_system_integration.gd - Complete save/load workflow integration testing
- test_checksum_fix_verification.gd - Verification of JSON serialization checksum fixes
- TestSettingsManager.gd - SettingsManager security validation, input validation, and error handling
- TestGameManager.gd - GameManager scene transitions, race condition protection, and input validation
- TestAudioManager.gd - AudioManager functionality, resource loading, and volume management
Component Tests
Test individual game components:
- TestMatch3Gameplay.gd - Match-3 gameplay mechanics, grid management, and match detection
- TestTile.gd - Tile component behavior, visual feedback, and memory safety
- TestValueStepper.gd - ValueStepper UI component functionality and settings integration
Integration Tests
Test system interactions and workflows:
- Future: test_game_flow.gd - Complete game session flow
- Future: test_debug_system.gd - Debug UI integration
- Future: test_localization.gd - Language switching and translations
Save System Testing Protocols
SaveManager implements security features that must be retested whenever it is modified.
Critical Test Suites
test_checksum_issue.gd - Checksum Validation
Tests: Checksum generation, JSON serialization consistency, save/load cycles.
Usage: Run after checksum algorithm changes.
TestMigrationCompatibility.gd - Version Migration
Tests: Backward compatibility, missing field addition, data structure normalization.
Usage: Test save format upgrades.
test_save_system_integration.gd - End-to-End Integration
Tests: Save/load workflow, grid state serialization, race condition prevention.
Usage: Run after SaveManager modifications.
test_checksum_fix_verification.gd - JSON Serialization Fix
Tests: Checksum consistency, int/float conversion, type safety validation.
Usage: Test JSON type conversion fixes.
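A sketch of the kind of check these suites perform, assuming the checksum is a SHA-256 over sorted-key JSON (the project's actual algorithm may differ). Note that JSON.parse_string returns floats for numeric values, which is exactly the int/float pitfall the fix addresses:

```gdscript
extends Node
# Sketch: verify the checksum survives a JSON round trip.
# _checksum() is a stand-in for SaveManager's real routine.

func _checksum(data: Dictionary) -> String:
	return JSON.stringify(data, "", true).sha256_text()  # sort_keys = true

func _ready():
	var data = {"score": 100, "level": 3}
	var before = _checksum(data)
	var parsed = JSON.parse_string(JSON.stringify(data))  # ints come back as floats
	var after = _checksum(parsed)
	print("round trip consistent: %s" % (before == after))
```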
Save System Security Testing
Required Tests Before SaveManager Changes
- Run 4 save system test suites
- Test tamper detection by modifying save files
- Validate error recovery by corrupting files
- Check race condition protection
- Verify permissive validation
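Tamper detection can be exercised by flipping a byte in a saved file and confirming the loader rejects it. This is a sketch; the save path and expected loader behavior are assumptions to adapt to the project:

```gdscript
# Sketch: corrupt a save file to exercise tamper detection.
# The path is a placeholder; use the project's actual save location.
func corrupt_save(path: String) -> void:
	var bytes = FileAccess.get_file_as_bytes(path)
	if bytes.is_empty():
		return
	bytes[0] = bytes[0] ^ 0xFF  # flip the first byte
	var file = FileAccess.open(path, FileAccess.WRITE)
	file.store_buffer(bytes)
	file.close()
	# A correct loader should now report a checksum mismatch, not crash.
```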
Performance Benchmarks
- Checksum calculation: < 1ms
- Memory usage: File size limits prevent exhaustion
- Error recovery: Never crash regardless of corruption
- Data preservation: User scores survive migration
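The < 1ms budget can be checked with a simple timer. This sketch again uses sorted-key JSON hashing as a stand-in for the real checksum routine:

```gdscript
# Sketch: time the checksum against the < 1 ms budget.
func benchmark_checksum(data: Dictionary) -> void:
	var start = Time.get_ticks_usec()
	var _checksum = JSON.stringify(data, "", true).sha256_text()
	var elapsed_ms = (Time.get_ticks_usec() - start) / 1000.0
	print("checksum took %.3f ms (budget: 1 ms)" % elapsed_ms)
```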
Test Sequence After Modifications
1. test_checksum_issue.gd - Verify checksum consistency
2. TestMigrationCompatibility.gd - Check version upgrades
3. test_save_system_integration.gd - Validate workflow
4. Manual testing with corrupted files
5. Performance validation
Failure Response: Test failure indicates corruption risk. Do not commit until all tests pass.
Running Tests
Manual Test Execution
Direct Script Execution (Recommended)
```shell
# Run specific test
godot --headless --script tests/test_checksum_issue.gd

# Run all save system tests
godot --headless --script tests/test_checksum_issue.gd
godot --headless --script tests/TestMigrationCompatibility.gd
godot --headless --script tests/test_save_system_integration.gd
```
Other Methods
- Temporary Autoload: Add to project.godot autoloads temporarily, run with F5
- Scene-based: Create temporary scene, add test script as child, run with F6
- Editor: Open test file, attach to scene, run with F6
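For the temporary-autoload method, the project.godot entry would look like the following (the autoload name is illustrative; remove the entry before committing):

```
[autoload]

TestLogging="*res://tests/TestLogging.gd"
```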
Automated Test Execution
Use provided scripts run_tests.bat (Windows) or run_tests.sh (Linux/Mac) to run all tests sequentially.
For CI/CD integration:
```yaml
- name: Run Test Suite
  run: |
    godot --headless --script tests/test_checksum_issue.gd
    godot --headless --script tests/TestMigrationCompatibility.gd
    # Add other tests as needed
```
Expected Test Output
Successful Test Run:
=== Testing Checksum Issue Fix ===
Testing checksum consistency across save/load cycles...
✅ SUCCESS: Checksums are deterministic
✅ SUCCESS: JSON serialization doesn't break checksums
✅ SUCCESS: Save/load cycle maintains checksum integrity
=== Test Complete ===
Failed Test Run:
=== Testing Checksum Issue Fix ===
Testing checksum consistency across save/load cycles...
❌ FAILURE: Checksum mismatch detected
Expected: 1234567890
Got: 9876543210
=== Test Failed ===
Test Execution Best Practices
Before: Remove existing save files, verify autoloads are configured, and run one test at a time.
During: Monitor console output and note timing (tests should complete within seconds).
After: Clean up temporary files and document any issues.
Troubleshooting
Common Issues:
- Permission errors: Run with elevated permissions if needed
- Missing dependencies: Ensure autoloads configured
- Timeout issues: Add timeout for hung tests
- Path issues: Use absolute paths if relative paths fail
Performance Benchmarks
Expected execution times: Individual tests < 5 seconds, total suite < 35 seconds.
If tests take longer, investigate file I/O issues, memory leaks, infinite loops, or external dependencies.
Best Practices
- Document expected behavior
- Test boundary conditions and edge cases
- Measure performance for critical components
- Include visual validation for UI components
- Cleanup after tests
Contributing
When adding test files:
- Follow naming conventions
- Follow coding standards for test code quality
- Understand system architecture before writing integration tests
- Update this file with test descriptions
- Ensure tests are self-contained and documented
- Test success and failure scenarios
This testing approach maintains code quality and provides validation tools for system changes.
See Also:
- CODE_OF_CONDUCT.md - Quality checklist before committing
- ARCHITECTURE.md - System design and architectural patterns