Tests Directory

Test scripts and utilities for validating Skelly project systems.

Overview

The tests/ directory contains:

  • System validation scripts
  • Component testing utilities
  • Integration tests
  • Performance benchmarks
  • Debugging tools

📋 File Naming: All test files follow the project naming convention: PascalCase with a Test prefix (e.g., TestAudioManager.gd).

Current Test Files

TestLogging.gd

Test script for DebugManager logging system.

Features:

  • Tests all log levels (TRACE, DEBUG, INFO, WARN, ERROR, FATAL)
  • Validates log level filtering
  • Tests category-based logging
  • Verifies debug mode integration
  • Demonstrates logging usage patterns

Usage:

# Option 1: Add as temporary autoload
# In project.godot, register res://tests/TestLogging.gd as an autoload

# Option 2: Instantiate in a scene
var test_script = preload("res://tests/TestLogging.gd").new()
add_child(test_script)

# Option 3: Run directly from editor
# Open the script and run the scene containing it

Expected Output: Formatted log messages showing:

  • Timestamp formatting
  • Log level filtering
  • Category organization
  • Debug mode dependency for TRACE/DEBUG levels

Adding New Tests

Follow these conventions for new test files:

File Naming

  • Use descriptive PascalCase names with a Test prefix, per the convention above: TestAudioManager.gd
  • Note: some existing tests still use snake_case test_ names (e.g., test_checksum_issue.gd, test_scene_transitions.gd)

File Structure

extends Node

# Brief description of what this test validates

func _ready():
    # Wait for system initialization if needed
    await get_tree().process_frame
    run_tests()

func run_tests():
    print("=== Starting [System Name] Tests ===")

    # Individual test functions
    test_basic_functionality()
    test_edge_cases()
    test_error_conditions()

    print("=== [System Name] Tests Complete ===")

func test_basic_functionality():
    print("\\n--- Test: Basic Functionality ---")
    # Test implementation

func test_edge_cases():
    print("\\n--- Test: Edge Cases ---")
    # Edge case testing

func test_error_conditions():
    print("\\n--- Test: Error Conditions ---")
    # Error condition testing

Testing Guidelines

  1. Independence: Each test is self-contained
  2. Cleanup: Restore original state after testing (see the sketch after this list)
  3. Clear Output: Use descriptive print statements
  4. Error Handling: Test success and failure conditions
  5. Documentation: Comment complex test scenarios
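
Guideline 2 (Cleanup) in practice: capture whatever global state the test touches and restore it before the test ends, as in this minimal sketch. DebugManager.log_level and the LogLevel enum are assumed names for illustration, not the project's confirmed API.

func test_log_level_filtering():
    var original_level = DebugManager.log_level  # assumed property name
    DebugManager.log_level = DebugManager.LogLevel.WARN  # assumed enum
    # ... exercise logging at each level and check what gets through ...
    DebugManager.log_level = original_level  # restore so later tests are unaffected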

Integration with Main Project

  • Temporary Usage: Add test files temporarily during development
  • Not in Production: Exclude from release builds
  • Autoload Testing: Add to autoloads temporarily for automatic execution (see the project.godot snippet below)
  • Manual Testing: Run individually for specific components
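
For autoload testing, a temporary entry in project.godot is enough; the asterisk marks the singleton as enabled. Remember to remove it before committing:

[autoload]
TestLogging="*res://tests/TestLogging.gd"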

Test Categories

System Tests

Test core autoload managers and global systems:

  • TestLogging.gd - DebugManager logging system
  • test_checksum_issue.gd - SaveManager checksum validation and deterministic hashing
  • TestMigrationCompatibility.gd - SaveManager version migration and backward compatibility
  • test_save_system_integration.gd - Complete save/load workflow integration testing
  • test_checksum_fix_verification.gd - Verification of JSON serialization checksum fixes
  • TestSettingsManager.gd - SettingsManager security validation, input validation, and error handling
  • TestGameManager.gd - GameManager scene transitions, race condition protection, and input validation
  • TestAudioManager.gd - AudioManager functionality, resource loading, and volume management

Component Tests

Test individual game components:

  • TestMatch3Gameplay.gd - Match-3 gameplay mechanics, grid management, and match detection
  • TestTile.gd - Tile component behavior, visual feedback, and memory safety
  • TestValueStepper.gd - ValueStepper UI component functionality and settings integration

Integration Tests

Test system interactions and workflows:

  • Future: test_game_flow.gd - Complete game session flow
  • Future: test_debug_system.gd - Debug UI integration
  • Future: test_localization.gd - Language switching and translations

Save System Testing Protocols

SaveManager implements security features that must be re-tested whenever the save system is modified.

Critical Test Suites

test_checksum_issue.gd - Checksum Validation

Tests: Checksum generation, JSON serialization consistency, save/load cycles
Usage: Run after checksum algorithm changes

TestMigrationCompatibility.gd - Version Migration

Tests: Backward compatibility, missing field addition, data structure normalization
Usage: Test save format upgrades

test_save_system_integration.gd - End-to-End Integration

Tests: Save/load workflow, grid state serialization, race condition prevention
Usage: Run after SaveManager modifications

test_checksum_fix_verification.gd - JSON Serialization Fix

Tests: Checksum consistency, int/float conversion, type safety validation
Usage: Test JSON type conversion fixes
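
The int/float problem these suites guard against: JSON stores every number as a float, so a checksum computed on the in-memory dictionary can differ from one computed after a save/load round trip. One defensive approach is to normalize the data through JSON before hashing, as in this sketch; stable_checksum is a hypothetical helper, not SaveManager's actual routine:

# Hypothetical helper: checksum the JSON-normalized form of the data so that
# int -> float conversion during a save/load cycle cannot change the result.
static func stable_checksum(data: Dictionary) -> String:
    # Round-trip through JSON so number types match what a load will produce.
    var normalized = JSON.parse_string(JSON.stringify(data))
    # sort_keys (third argument) keeps key order deterministic across runs.
    return JSON.stringify(normalized, "", true).sha256_text()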

Save System Security Testing

Required Tests Before SaveManager Changes

  1. Run all four save system test suites listed above
  2. Test tamper detection by modifying save files (see the sketch after this list)
  3. Validate error recovery by corrupting files
  4. Check race condition protection
  5. Verify permissive validation
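
A sketch of step 2; the save path and SaveManager.load_game() are assumptions for illustration, not the project's confirmed API:

# Flip one byte in the save file and expect the load to be rejected cleanly.
func test_tamper_detection():
    var path := "user://savegame.save"  # assumed save location
    var file := FileAccess.open(path, FileAccess.READ_WRITE)
    file.seek(8)
    var byte := file.get_8()
    file.seek(8)
    file.store_8(byte ^ 0xFF)  # corrupt a single byte
    file.close()
    assert(not SaveManager.load_game())  # assumed API; must fail, never crash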

Performance and Safety Requirements

  • Checksum calculation: < 1ms (see the timing sketch below)
  • Memory usage: File size limits prevent exhaustion
  • Error recovery: Never crash regardless of corruption
  • Data preservation: User scores survive migration
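
The checksum budget can be spot-checked from a test script with Godot's microsecond clock; the hashing line below is a stand-in for whatever routine SaveManager actually uses:

# Timing sketch for the "< 1ms" checksum budget.
func benchmark_checksum(data: Dictionary) -> void:
    var start := Time.get_ticks_usec()
    var checksum := JSON.stringify(data, "", true).sha256_text()  # stand-in hash
    var elapsed_ms := (Time.get_ticks_usec() - start) / 1000.0
    print("Checksum %s... computed in %.3f ms" % [checksum.substr(0, 8), elapsed_ms])
    assert(elapsed_ms < 1.0)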

Test Sequence After Modifications

  1. test_checksum_issue.gd - Verify checksum consistency
  2. TestMigrationCompatibility.gd - Check version upgrades
  3. test_save_system_integration.gd - Validate workflow
  4. Manual testing with corrupted files
  5. Performance validation

Failure Response: Test failure indicates corruption risk. Do not commit until all tests pass.

Running Tests

Manual Test Execution

# Run specific test
godot --headless --script tests/test_checksum_issue.gd

# Run all save system tests
godot --headless --script tests/test_checksum_issue.gd
godot --headless --script tests/TestMigrationCompatibility.gd
godot --headless --script tests/test_save_system_integration.gd

Other Methods

  • Temporary Autoload: Add to project.godot autoloads temporarily, run with F5
  • Scene-based: Create temporary scene, add test script as child, run with F6
  • Editor: Open test file, attach to scene, run with F6

Automated Test Execution

Use the provided scripts run_tests.bat (Windows) or run_tests.sh (Linux/macOS) to run all tests sequentially.
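
If the shell runner ever needs to be recreated, it is roughly a loop like this (a sketch, not the committed script):

#!/bin/sh
# Run every test script headlessly; stop on the first failure.
for test in tests/Test*.gd tests/test_*.gd; do
    echo "=== Running $test ==="
    godot --headless --script "$test" || exit 1
done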

For CI/CD integration:

- name: Run Test Suite
  run: |
    godot --headless --script tests/test_checksum_issue.gd
    godot --headless --script tests/TestMigrationCompatibility.gd
    # Add other tests as needed

Expected Test Output

Successful Test Run:

=== Testing Checksum Issue Fix ===
Testing checksum consistency across save/load cycles...
✅ SUCCESS: Checksums are deterministic
✅ SUCCESS: JSON serialization doesn't break checksums
✅ SUCCESS: Save/load cycle maintains checksum integrity
=== Test Complete ===

Failed Test Run:

=== Testing Checksum Issue Fix ===
Testing checksum consistency across save/load cycles...
❌ FAILURE: Checksum mismatch detected
Expected: 1234567890
Got: 9876543210
=== Test Failed ===

Test Execution Best Practices

  • Before: Remove existing save files, verify autoloads are configured, and run one test at a time
  • During: Monitor console output and note timing (tests should complete within seconds)
  • After: Clean up temporary files and document any issues

Troubleshooting

Common Issues:

  • Permission errors: Run with elevated permissions if needed
  • Missing dependencies: Ensure autoloads configured
  • Timeout issues: Add a timeout for hung tests (see the example after this list)
  • Path issues: Use absolute paths if relative paths fail
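
For the timeout case, the standard coreutils wrapper works with the headless commands above:

# Kill the test if it hangs for more than 30 seconds.
timeout 30 godot --headless --script tests/test_checksum_issue.gd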

Performance Benchmarks

Expected execution times: Individual tests < 5 seconds, total suite < 35 seconds.

If tests take longer, investigate file I/O issues, memory leaks, infinite loops, or external dependencies.

Best Practices

  1. Document expected behavior
  2. Test boundary conditions and edge cases
  3. Measure performance for critical components
  4. Include visual validation for UI components
  5. Cleanup after tests

Contributing

When adding test files:

  1. Follow naming conventions
  2. Follow coding standards for test code quality
  3. Understand system architecture before writing integration tests
  4. Update this file with test descriptions
  5. Ensure tests are self-contained and documented
  6. Test success and failure scenarios

This testing approach maintains code quality and provides validation tools for system changes.
