
Week 11 Update: Baseline Schedulers Implemented

Overview

This week marked a key step in validating the flexibility of our scheduler interface—we’ve implemented a set of baseline scheduling policies and created our own custom strategy to compare against them. These implementations not only serve as reference points but will also be used in upcoming performance evaluations.

Baseline Scheduler Implementations

To test the configurability of our system and provide users with usable defaults, we implemented the following scheduling strategies:

  • First-Come, First-Served (FCFS):
    • A simple queue-based policy where function invocations are scheduled in the order they arrive, without consideration for load or resource usage.
    • Provides a clear performance baseline and is easy to reason about.
  • Random Scheduling:
    • Selects a compute host randomly from the available pool.
    • Useful for observing non-deterministic behavior and evaluating how well the system handles unpredictability.

These basic schedulers demonstrate how straightforward it is to plug in new policies using our virtual class interface and centralized state model.
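
To make that concrete, here is a minimal sketch of what these baselines can look like against a virtual scheduler base class. All of the names here (Scheduler, HostState, Decision, schedule) are illustrative stand-ins rather than our actual interface, and the capacity check in FCFS is an assumption about how host availability is surfaced by the centralized state model:

```cpp
#include <cstddef>
#include <deque>
#include <optional>
#include <random>
#include <vector>

// Hypothetical snapshot of a compute host, as exposed by the centralized
// state model (field names are illustrative, not the real API).
struct HostState {
    int  host_id;
    int  running_invocations;   // invocations currently executing
    int  pending_tasks;         // invocations queued on this host
    bool has_capacity;          // whether the host can accept more work
};

struct Invocation { int invocation_id; };
struct Decision   { Invocation invocation; int host_id; };

// Simplified virtual interface: given the arrival-ordered queue of pending
// invocations and the current host states, decide the next (invocation, host)
// pair to dispatch, or nothing if no placement is possible right now.
class Scheduler {
public:
    virtual ~Scheduler() = default;
    virtual std::optional<Decision> schedule(std::deque<Invocation>& pending,
                                             const std::vector<HostState>& hosts) = 0;
};

// FCFS: always dispatch the oldest pending invocation, onto the first host
// that reports capacity, ignoring how loaded that host already is.
class FcfsScheduler : public Scheduler {
public:
    std::optional<Decision> schedule(std::deque<Invocation>& pending,
                                     const std::vector<HostState>& hosts) override {
        if (pending.empty()) return std::nullopt;
        for (const auto& host : hosts) {
            if (host.has_capacity) {
                Decision d{pending.front(), host.host_id};
                pending.pop_front();
                return d;
            }
        }
        return std::nullopt;  // no host can take work yet
    }
};

// Random: still FIFO over invocations, but the target host is drawn
// uniformly at random from the available pool.
class RandomScheduler : public Scheduler {
public:
    std::optional<Decision> schedule(std::deque<Invocation>& pending,
                                     const std::vector<HostState>& hosts) override {
        if (pending.empty() || hosts.empty()) return std::nullopt;
        std::uniform_int_distribution<std::size_t> pick(0, hosts.size() - 1);
        Decision d{pending.front(), hosts[pick(rng_)].host_id};
        pending.pop_front();
        return d;
    }
private:
    std::mt19937 rng_{std::random_device{}()};
};
```

The specific types matter less than the shape: each policy is a small class implementing a single virtual method, with the centralized state model supplying the host snapshots it needs.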

Custom Workload-Balancing Scheduler

We also developed our own workload-balancing scheduler, which aims to intelligently distribute invocations based on compute host load:

  • Design Goals:
    • Minimize host overload and improve resource utilization.
    • Avoid long queues on individual hosts by preferring less loaded options.
  • Approach:
    • The scheduler examines the number of currently running invocations and pending tasks per host.
    • It selects the compute host with the least combined workload, factoring in both current usage and queued operations (sketched in code after this list).
  • Benchmarking Plans:
    • This scheduler will serve as our internal benchmark when comparing against FCFS and Random strategies under a variety of simulated workloads.
    • Future tests will measure metrics such as average completion time, resource efficiency, and system throughput.
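
Using the same hypothetical interface as the baseline sketch above, the core of this policy reduces to a least-loaded selection. The combined-load formula (running + pending) mirrors the description above; the exact weighting in our actual implementation may differ:

```cpp
#include <algorithm>
// Reuses the hypothetical Scheduler, HostState, Invocation, and Decision
// types from the baseline sketch above.

// Workload balancing: dispatch the oldest pending invocation to the host
// with the smallest combined load (running invocations + queued tasks).
class WorkloadBalancingScheduler : public Scheduler {
public:
    std::optional<Decision> schedule(std::deque<Invocation>& pending,
                                     const std::vector<HostState>& hosts) override {
        if (pending.empty() || hosts.empty()) return std::nullopt;
        auto least_loaded = std::min_element(
            hosts.begin(), hosts.end(),
            [](const HostState& a, const HostState& b) {
                return a.running_invocations + a.pending_tasks
                     < b.running_invocations + b.pending_tasks;
            });
        Decision d{pending.front(), least_loaded->host_id};
        pending.pop_front();
        return d;
    }
};
```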

Next Steps

  • Testing Under Load:
    • Begin running simulations using different workloads to evaluate the behavior and performance of each scheduling policy.
  • Metric Collection:
    • Integrate tools to collect performance data (e.g., invocation latency, resource saturation) for automated comparison; a rough sketch of the data we might record follows this list.
  • User-Facing Templates:
    • Prepare stripped-down versions of these schedulers as example templates for users to build their own strategies on top of.
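
As a rough sketch of the per-invocation data such metric collection might record (the struct and its members are hypothetical, not an existing tool's API):

```cpp
#include <chrono>
#include <cstdint>

// One possible shape for per-invocation measurements; the fields mirror the
// metrics listed above (latency, completion time) and nothing more.
struct InvocationMetrics {
    std::uint64_t invocation_id;
    std::chrono::steady_clock::time_point submitted;
    std::chrono::steady_clock::time_point started;
    std::chrono::steady_clock::time_point completed;

    // Queueing latency: time between submission and start of execution.
    std::chrono::milliseconds queue_latency() const {
        return std::chrono::duration_cast<std::chrono::milliseconds>(started - submitted);
    }
    // Total completion time as seen by the caller.
    std::chrono::milliseconds completion_time() const {
        return std::chrono::duration_cast<std::chrono::milliseconds>(completed - submitted);
    }
};
```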

By implementing multiple scheduling strategies, we’ve demonstrated both the power and flexibility of our framework. Our workload-balancing scheduler will serve as a strong candidate for default use, while FCFS and Random provide solid comparison points to evaluate real-world effectiveness.
