add unit tests
saveload fixes

docs/TESTING.md

# Tests Directory

Test scripts and utilities for validating Skelly project systems.

## Overview

The `tests/` directory contains:
- System validation scripts
- Component testing utilities
- Integration tests

## Current Test Files

### `test_logging.gd`
Test script for DebugManager logging system.

**Features:**
- Tests all log levels (TRACE, DEBUG, INFO, WARN, ERROR, FATAL)
- Validates log level filtering
- Tests category-based logging
- Verifies debug mode integration
- Demonstrates logging usage patterns

**Usage:**
```gdscript
# Instantiate the test script and add it to the running scene tree.
var test_script = preload("res://tests/test_logging.gd").new()
add_child(test_script)
```

**Expected Output:**
Formatted log messages showing:
- Timestamp formatting
- Log level filtering
- Category organization
- Debug mode dependency for TRACE/DEBUG levels

## Adding New Tests

Follow these conventions for new test files:

### File Naming
- Use descriptive names starting with `test_`

### Testing Guidelines

1. **Independence**: Each test is self-contained
2. **Cleanup**: Restore original state after testing
3. **Clear Output**: Use descriptive print statements
4. **Error Handling**: Test success and failure conditions
5. **Documentation**: Comment complex test scenarios

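
As a rough sketch of these guidelines in practice (the structure and function names below are placeholders, not an existing project file):

```gdscript
extends Node
# Skeleton test script following the guidelines above; names are placeholders.

func _ready() -> void:
    print("=== Testing <component> ===")
    _setup()
    test_basic_behaviour()
    test_error_conditions()
    _teardown()
    print("=== Test Complete ===")

func _setup() -> void:
    # Capture any global state this test changes (settings, debug modes, ...).
    pass

func test_basic_behaviour() -> void:
    # Clear, per-check output makes progress easy to follow in the console.
    print("PASS: expected behaviour")

func test_error_conditions() -> void:
    # Exercise failure paths as well as success paths.
    print("PASS: invalid input rejected")

func _teardown() -> void:
    # Restore the original state so later tests stay independent.
    pass
```
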
### Integration with Main Project

- **Temporary Usage**: Add test files temporarily during development
- **Not in Production**: Exclude from release builds
- **Autoload Testing**: Add to autoloads temporarily for automatic execution
- **Manual Testing**: Run individually for specific components

## Test Categories

### System Tests
Test core autoload managers and global systems:
- `test_logging.gd` - DebugManager logging system
- `test_checksum_issue.gd` - SaveManager checksum validation and deterministic hashing
- `test_migration_compatibility.gd` - SaveManager version migration and backward compatibility
- `test_save_system_integration.gd` - Complete save/load workflow integration testing
- `test_checksum_fix_verification.gd` - Verification of JSON serialization checksum fixes
- `test_settings_manager.gd` - SettingsManager security validation, input validation, and error handling
- `test_game_manager.gd` - GameManager scene transitions, race condition protection, and input validation
- `test_audio_manager.gd` - AudioManager functionality, resource loading, and volume management

### Component Tests
Test individual game components:
- `test_match3_gameplay.gd` - Match-3 gameplay mechanics, grid management, and match detection
- `test_tile.gd` - Tile component behavior, visual feedback, and memory safety
- `test_value_stepper.gd` - ValueStepper UI component functionality and settings integration

### Integration Tests
Test system interactions and workflows:
- Future: `test_debug_system.gd` - Debug UI integration
- Future: `test_localization.gd` - Language switching and translations

## Save System Testing Protocols

SaveManager implements security features that must be re-tested whenever it is modified.

### Critical Test Suites

#### **`test_checksum_issue.gd`** - Checksum Validation
**Tests**: Checksum generation, JSON serialization consistency, save/load cycles
**Usage**: Run after checksum algorithm changes

#### **`test_migration_compatibility.gd`** - Version Migration
**Tests**: Backward compatibility, missing field addition, data structure normalization
**Usage**: Test save format upgrades
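
As an illustration of what such a migration step does (the field names and version constant are assumptions, not the real save schema):

```gdscript
extends RefCounted
# Illustrative migration: fill in fields that older saves may be missing.

const CURRENT_VERSION := 2  # assumed current save format version

static func migrate(data: Dictionary) -> Dictionary:
    if not data.has("version"):
        data["version"] = 1            # pre-versioning saves default to 1
    if not data.has("high_scores"):
        data["high_scores"] = []       # field assumed to be added later
    data["version"] = CURRENT_VERSION  # normalize to the current format
    return data
```
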

#### **`test_save_system_integration.gd`** - End-to-End Integration
**Tests**: Save/load workflow, grid state serialization, race condition prevention
**Usage**: Run after SaveManager modifications

#### **`test_checksum_fix_verification.gd`** - JSON Serialization Fix
**Tests**: Checksum consistency, int/float conversion, type safety validation
**Usage**: Test JSON type conversion fixes
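
The underlying issue is that JSON parses every number back as a float, so a checksum computed before saving can differ from one computed after loading. A minimal sketch of a round-trip-stable checksum (illustrative only; SaveManager's actual algorithm may differ):

```gdscript
extends RefCounted
# Sketch: a checksum that survives a JSON save/load round trip.

static func checksum(data: Dictionary) -> String:
    # Normalize through JSON first, so ints that come back as floats
    # after loading (e.g. 10 -> 10.0) hash the same in both directions.
    var normalized: Variant = JSON.parse_string(JSON.stringify(data))
    # sort_keys = true keeps key order, and therefore the digest, stable.
    return JSON.stringify(normalized, "", true).md5_text()
```

With this approach a saved `{"score": 10}` and its loaded copy `{"score": 10.0}` produce the same digest, which is the class of mismatch this suite guards against.
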

### Save System Security Testing

#### **Required Tests Before SaveManager Changes**

1. Run all four save system test suites
2. Test tamper detection by modifying save files (see the sketch after this list)
3. Validate error recovery by corrupting files
4. Check race condition protection
5. Verify permissive validation

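
For step 2, a throwaway tamper check can be as simple as flipping one byte of an existing save and confirming the next load rejects it (the save path below is an assumption; substitute the project's real one):

```gdscript
extends Node
# Manual tamper-detection check. SAVE_PATH is an assumed location.

const SAVE_PATH := "user://savegame.save"

func _ready() -> void:
    var file := FileAccess.open(SAVE_PATH, FileAccess.READ_WRITE)
    if file == null:
        push_error("No save file at %s - create one first." % SAVE_PATH)
        return
    # Flip the last byte of the file to simulate tampering.
    var last := file.get_length() - 1
    file.seek(last)
    var original := file.get_8()
    file.seek(last)
    file.store_8(original ^ 0xFF)
    file = null  # dropping the reference flushes and closes the file
    print("Save tampered - reload and confirm the load is rejected.")
```
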
#### **Performance Benchmarks**

- Checksum calculation: < 1 ms (a timing sketch follows this list)
- Memory usage: File size limits prevent memory exhaustion
- Error recovery: Never crash, regardless of save file corruption
- Data preservation: User scores survive migration

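
A quick way to sanity-check the first benchmark (the hash below is only a stand-in; time the real SaveManager checksum routine instead):

```gdscript
extends Node
# Rough timing harness for the "< 1 ms per checksum" budget.

func _ready() -> void:
    var sample := {"version": 2, "score": 12345, "grid": range(64)}
    var start := Time.get_ticks_usec()
    for _i in 100:
        var _digest := JSON.stringify(sample, "", true).md5_text()
    var avg_ms := (Time.get_ticks_usec() - start) / 100.0 / 1000.0
    print("Average checksum time: %.3f ms" % avg_ms)
```
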
#### **Test Sequence After Modifications**

1. `test_checksum_issue.gd` - Verify checksum consistency
2. `test_migration_compatibility.gd` - Check version upgrades
3. `test_save_system_integration.gd` - Validate workflow
4. Manual testing with corrupted files
5. Performance validation

**Failure Response**: Test failure indicates corruption risk. Do not commit until all tests pass.

## Running Tests

### Manual Test Execution

#### **Direct Script Execution (Recommended)**

```bash
# Run specific test
godot --headless --script tests/test_checksum_issue.gd

# Run all save system tests
godot --headless --script tests/test_checksum_issue.gd
godot --headless --script tests/test_migration_compatibility.gd
godot --headless --script tests/test_save_system_integration.gd
```
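
Scripts launched this way generally need to inherit from `SceneTree` or `MainLoop` to run standalone; a minimal headless test shape looks roughly like this (illustrative, not one of the existing test files):

```gdscript
extends SceneTree
# Minimal shape for a test run via `godot --headless --script ...`.

func _init() -> void:
    print("=== Running headless test ===")
    var passed := true
    # ... perform checks here, setting `passed` to false on any failure ...
    quit(0 if passed else 1)  # non-zero exit codes let CI detect failures
```
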
#### **Other Methods**
- **Temporary Autoload**: Add to project.godot autoloads temporarily, run with F5
- **Scene-based**: Create temporary scene, add test script as child, run with F6
- **Editor**: Open test file, attach to scene, run with F6

### Automated Test Execution

Use the provided scripts `run_tests.bat` (Windows) or `run_tests.sh` (Linux/Mac) to run all tests sequentially.

For CI/CD integration:

```yaml
- name: Run Test Suite
  run: |
    godot --headless --script tests/test_checksum_issue.gd
    godot --headless --script tests/test_migration_compatibility.gd
    # Add other tests as needed
```


### Expected Test Output

#### **Successful Test Run:**
```
=== Testing Checksum Issue Fix ===
Testing checksum consistency across save/load cycles...
✅ SUCCESS: Checksums are deterministic
✅ SUCCESS: JSON serialization doesn't break checksums
✅ SUCCESS: Save/load cycle maintains checksum integrity
=== Test Complete ===
```

#### **Failed Test Run:**
```
=== Testing Checksum Issue Fix ===
Testing checksum consistency across save/load cycles...
❌ FAILURE: Checksum mismatch detected
Expected: 1234567890
Got: 9876543210
=== Test Failed ===
```

### Test Execution Best Practices

- **Before**: Remove existing save files (see the snippet below), verify autoloads are configured, run one test at a time
- **During**: Monitor console output, note timing (tests complete within seconds)
- **After**: Clean up temporary files, document issues

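
For the save-file cleanup step, a snippet like the following can run before the tests (the file name is an assumption; use the project's actual save location):

```gdscript
extends Node
# Pre-test cleanup: remove a stale save so every run starts from a clean state.

const SAVE_FILE := "savegame.save"  # assumed file name inside user://

func _ready() -> void:
    if FileAccess.file_exists("user://" + SAVE_FILE):
        DirAccess.open("user://").remove(SAVE_FILE)
        print("Removed old save: user://" + SAVE_FILE)
```
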
### Troubleshooting

**Common Issues:**
- Permission errors: Run with elevated permissions if needed
- Missing dependencies: Ensure autoloads are configured
- Timeout issues: Add a timeout for hung tests
- Path issues: Use absolute paths if relative paths fail

### Performance Benchmarks

Expected execution times: Individual tests < 5 seconds, total suite < 35 seconds.

If tests take longer, investigate file I/O issues, memory leaks, infinite loops, or external dependencies.
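
For the memory-leak angle, Godot's built-in monitors give a quick signal at the end of a run (these are engine counters, not project-specific code):

```gdscript
extends Node
# Drop-in leak report to call after a test finishes.

func _report_leaks() -> void:
    # Non-zero orphan counts usually mean nodes were created but never freed.
    print("Orphan nodes: ", Performance.get_monitor(Performance.OBJECT_ORPHAN_NODE_COUNT))
    print("Static memory: %d bytes" % OS.get_static_memory_usage())
```
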
## Best Practices

1. Document expected behavior
2. Test boundary conditions and edge cases
3. Measure performance for critical components
4. Include visual validation for UI components
5. Clean up after tests

## Contributing

When adding test files:
1. Follow naming and structure conventions
2. Update this README with test descriptions
3. Ensure tests are self-contained and documented
4. Test success and failure scenarios


This testing approach maintains code quality and provides validation tools for system changes.