Week 10 Update: Implementation Continues & Scheduler Testing Begins
Overview
This week’s efforts were focused on solidifying our scheduler infrastructure and laying the foundation for reliable testing. We’ve begun integrating Google Test to validate scheduler behavior, and we’ve officially committed to offering users complete control over scheduling paradigms via the virtual scheduler interface.
Scheduler Design Decision Finalized
After several rounds of discussion and prototyping, we’ve decided to fully embrace configurability in our scheduler design:
- User-Defined Scheduling Logic:
  - Users will have unrestricted control over `manage_images` and `schedule_invocation`, allowing for highly custom and potentially experimental scheduling strategies.
  - This choice prioritizes flexibility and future extensibility, empowering advanced users to tailor scheduling to their specific use cases.
- Framework Responsibility:
  - The framework will handle lifecycle management and simulation mechanics, but all core scheduling decisions will be user-driven through the interface.

While this increases the responsibility on users to implement correct logic, we believe it aligns better with our project’s goals of supporting diverse research and experimentation.
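To make the split between user logic and framework responsibility concrete, here is a minimal sketch of what a virtual scheduler interface along these lines could look like. All type names, fields, and signatures below are illustrative assumptions, not the actual declarations in our codebase:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Illustrative stand-in types; the real system state classes differ.
struct Host {
    std::string id;
    int free_cores = 0;
    std::vector<std::string> cached_images;  // images already present on this host
};

struct Invocation {
    std::string function_name;
    std::string image;  // container image required to run the function
};

// Sketch of the virtual scheduler interface: the framework calls these hooks,
// but the policy inside them is entirely user-defined.
class Scheduler {
public:
    virtual ~Scheduler() = default;

    // Decide which images should be present on which hosts for this invocation.
    virtual void manage_images(std::vector<Host>& hosts, const Invocation& inv) = 0;

    // Pick a host for the invocation, or return nullptr if none is suitable.
    virtual Host* schedule_invocation(std::vector<Host>& hosts, const Invocation& inv) = 0;
};

// Example user subclass: cache the image everywhere, pick the first host
// with a free core. A deliberately naive policy, just to show subclassing.
class FirstFitScheduler : public Scheduler {
public:
    void manage_images(std::vector<Host>& hosts, const Invocation& inv) override {
        for (auto& h : hosts) {
            if (std::find(h.cached_images.begin(), h.cached_images.end(), inv.image)
                == h.cached_images.end()) {
                h.cached_images.push_back(inv.image);  // simulate a download
            }
        }
    }

    Host* schedule_invocation(std::vector<Host>& hosts, const Invocation&) override {
        for (auto& h : hosts) {
            if (h.free_cores > 0) return &h;
        }
        return nullptr;  // no capacity anywhere
    }
};
```

A user who wants, say, locality-aware or load-balancing placement would override the same two hooks with their own logic; the framework never second-guesses the returned host beyond basic validity checks.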
Implementation Progress
We’ve continued integrating the scheduler class into the overall system:
- Invocation Routing:
  - The system now correctly calls into the user-defined `schedule_invocation` during function submission.
  - Selected hosts are used to dispatch the function invocation, with fallback checks in place to handle null or invalid responses.
- Image Handling:
  - The `manage_images` function is actively used to trigger image downloads and manage per-host image presence.
- State Synchronization:
  - Ongoing work ensures that all relevant state updates (e.g., function queues, host load) are reflected in the shared state class with consistent visibility.
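The invocation-routing flow above, including the fallback checks for null or invalid scheduler responses, can be sketched roughly as follows. Every name here is a stand-in; the real submission path lives inside the simulation engine:

```cpp
#include <string>
#include <vector>

// Minimal stand-in types for this sketch.
struct Host { std::string id; int free_cores = 0; };
struct Invocation { std::string function_name; };

// Hypothetical user hook: returns the chosen host, or nullptr if none fits.
using ScheduleFn = Host* (*)(std::vector<Host>&, const Invocation&);

// Framework-side routing with fallback checks (a sketch of the flow
// described above, not the actual implementation).
bool submit(std::vector<Host>& hosts, const Invocation& inv, ScheduleFn schedule) {
    Host* chosen = schedule(hosts, inv);  // call into user-defined logic
    if (chosen == nullptr) {
        // Fallback: the scheduler declined; reject or re-queue the invocation.
        return false;
    }
    if (chosen->free_cores <= 0) {
        // Invalid response from the user scheduler; treat it as a failure
        // rather than dispatching to an overloaded host.
        return false;
    }
    --chosen->free_cores;  // dispatch: reserve capacity on the selected host
    return true;
}
```

The key design point is that the framework validates the user's answer defensively but never overrides it, keeping scheduling decisions fully user-driven.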
Adding Google Test Support
To ensure reliability and catch edge cases early, we’ve begun introducing Google Test cases for the scheduler:
- Test Coverage Goals:
  - Verify correct interaction between user schedulers and the core simulation engine.
  - Ensure valid host selection across various custom scheduler implementations.
  - Detect invalid behavior early (e.g., unhandled nulls, missing image downloads, resource violations).
- Initial Tests:
  - Started writing unit tests for basic built-in schedulers (e.g., FCFS, Random) as reference implementations.
  - Set up test fixtures to simulate realistic system state for scheduler evaluation.
Next Steps
- Expand Test Suite:
  - Add more test scenarios, including edge cases and failure conditions (e.g., all hosts full, image unavailable).
  - Introduce stress tests for high-load function submissions with varying scheduler logic.
- Continue Integration:
  - Finalize remaining pieces of scheduler interaction with system components (completion tracking, resource rollback on failure).
- User Documentation:
  - Begin drafting documentation and examples to help users implement their own scheduler subclasses.
With our design now locked in and testing infrastructure in place, we’re entering the final phase of development for the scheduler framework. Our focus will be on tightening the integration, ensuring correctness, and delivering a solid foundation for custom scheduling research and experimentation.