Week 2 Update: Function Registration & Codebase Exploration
Overview
This week marked our first sprint on the serverless compute feature for WRENCH. Our primary focus was to register serverless functions and gain a deep understanding of the overall code architecture, setting the stage for a robust implementation.
Project Background
Our project introduces a new service, ServerlessComputeService, that manages a serverless infrastructure. Key aspects of this service include:
- Head Host & Compute Hosts:
- A designated head host with an attached disk.
- Multiple compute hosts, each with its own disk, designed for concurrent function execution.
- Slot Management:
- Each compute host supports a defined number of concurrent function invocations (slots), with plans to refine this mechanism further.
- Image Storage & Management:
- The head host uses a storage service to maintain images (using an LRU strategy).
- Compute hosts maintain local storage for running containers, with priority given to active containers over idle images.
- Internal Service Initialization:
- Upon startup, an internal bare-metal compute service and associated storage services are created across all hosts.
- Function invocations are handled via custom actions submitted to this bare-metal service.
- FunctionManager Service:
- This service registers functions and manages their lifecycle—from invocation to state tracking (running, completed, timeout, or failure).
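The LRU image-eviction idea on the head host can be sketched as a small standalone cache. This is a hypothetical illustration, not the actual WRENCH storage-service code: the class name and the image-count capacity (the real service would track bytes) are assumptions for clarity.

```cpp
#include <cassert>
#include <list>
#include <string>
#include <unordered_map>

// Hypothetical sketch of LRU image eviction on the head host's storage
// service. Images are keyed by name; capacity is an image count here for
// simplicity (the real service would account for bytes on disk).
class LRUImageCache {
public:
    explicit LRUImageCache(std::size_t capacity) : capacity_(capacity) {}

    // Returns true if the image was already cached (and refreshes its recency).
    bool touch(const std::string& image) {
        auto it = index_.find(image);
        if (it != index_.end()) {
            order_.splice(order_.begin(), order_, it->second);  // move to front
            return true;
        }
        if (order_.size() == capacity_) {  // evict the least recently used
            index_.erase(order_.back());
            order_.pop_back();
        }
        order_.push_front(image);
        index_[image] = order_.begin();
        return false;
    }

    bool contains(const std::string& image) const {
        return index_.count(image) > 0;
    }

private:
    std::size_t capacity_;
    std::list<std::string> order_;  // front = most recently used
    std::unordered_map<std::string, std::list<std::string>::iterator> index_;
};
```

Keeping an iterator map alongside the recency list makes both the lookup and the move-to-front constant time, which matters when every invocation touches the cache.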
Week 2: Key Activities
1. Registering a Serverless Function
- Implementation:
We implemented function registration within the ServerlessComputeService. The registration process:
- Sends a message from the registration function (inside the serverless compute service) to the thread running the service's main method.
- Processes the registration through the central message-passing system, which forms the backbone of our inter-service communication.
- Workflow Integration:
This process is important because it establishes the message-passing pattern that many subsequent operations in the service architecture will follow.
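The registration round-trip described above can be sketched with a mailbox and an acknowledgement. The names here (`Mailbox`, `RegisterFunctionMessage`) are illustrative and not the WRENCH API; the sketch only shows the shape of the exchange: the caller posts a message to the service's queue and waits for the service thread's reply.

```cpp
#include <cassert>
#include <condition_variable>
#include <future>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Hypothetical message carrying a registration request plus a channel
// for the service thread to acknowledge it.
struct RegisterFunctionMessage {
    std::string function_name;
    std::promise<bool> ack;  // fulfilled by the service's main loop
};

// Minimal thread-safe mailbox standing in for the message-passing layer.
class Mailbox {
public:
    void post(RegisterFunctionMessage msg) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(std::move(msg));
        cv_.notify_one();
    }
    RegisterFunctionMessage take() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        RegisterFunctionMessage msg = std::move(queue_.front());
        queue_.pop();
        return msg;
    }
private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<RegisterFunctionMessage> queue_;
};
```

A caller would obtain the future from the message's promise before posting, then block on it until the service thread records the function and acknowledges.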
2. Codebase Familiarization
- Understanding Service Threads:
Our investigation revealed that:
- Each service (execution controller, function manager, compute service) operates on its own dedicated thread.
- Communication between these services is handled through message passing, ensuring asynchronous and decoupled interactions.
- Benefits:
- Modularity: Easier to isolate and manage functionalities.
- Scalability: Supports concurrent function invocations and efficient resource management.
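The concurrent-invocation support mentioned above ties back to the per-host slot mechanism from the project background. As a hypothetical sketch (the class and method names are assumptions, not the WRENCH API), the bookkeeping a compute host needs is quite small:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch of per-host slot bookkeeping: each compute host
// exposes a fixed number of invocation slots, and an invocation is
// admitted only if a slot is free.
class SlotTracker {
public:
    explicit SlotTracker(std::size_t num_slots) : free_slots_(num_slots) {}

    // Try to claim a slot for a new invocation; false means the host is
    // full and the scheduler should pick another host or queue the request.
    bool try_acquire() {
        if (free_slots_ == 0) return false;
        --free_slots_;
        return true;
    }

    // Called when an invocation completes, times out, or fails.
    void release() { ++free_slots_; }

    std::size_t available() const { return free_slots_; }

private:
    std::size_t free_slots_;
};
```

Because each service processes its messages on a single dedicated thread, no locking is needed inside the tracker itself; the message queue serializes access.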
Challenges & Learnings
- Concurrency Handling:
- A key challenge was ensuring that concurrent requests (e.g., downloading the same image multiple times) are managed effectively.
- Our solution tracks in-progress downloads to prevent redundant work.
- Message Passing Mechanism:
- Establishing a robust messaging framework was essential. The mechanism now supports function registration and will also be central to function invocation and state updates in future iterations.
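The in-progress download tracking can be sketched as follows. This is a simplified, hypothetical illustration of the idea rather than our actual implementation: the first request for an image starts the transfer, and later requests for the same image are parked until it completes, so the image is fetched only once.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <set>
#include <string>
#include <vector>

// Hypothetical sketch of de-duplicating concurrent image downloads.
// Single-threaded on purpose: the owning service handles one message at
// a time, so the tracker needs no locks.
class DownloadTracker {
public:
    // Returns true if this call actually started a download, false if one
    // was already in flight for the same image (the caller just waits).
    bool request(const std::string& image, std::function<void()> on_done) {
        waiters_[image].push_back(std::move(on_done));
        if (in_flight_.count(image)) return false;  // piggyback on transfer
        in_flight_.insert(image);
        return true;                                // caller starts transfer
    }

    // Invoked when the transfer for `image` finishes: notify every waiter.
    void complete(const std::string& image) {
        for (auto& cb : waiters_[image]) cb();
        waiters_.erase(image);
        in_flight_.erase(image);
    }

private:
    std::set<std::string> in_flight_;
    std::map<std::string, std::vector<std::function<void()>>> waiters_;
};
```

The key design point is that every requester registers a completion callback, but only the first one triggers an actual transfer, which is exactly the redundant work the challenge above describes.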
Next Steps
- Extend API Functionality:
- Develop the function invocation process with detailed state tracking (completion, timeout, failure).
- Implement policies for node allocation and storage management for function executions.
- Optimize Concurrency & Caching:
- Enhance the handling of concurrent requests, particularly around image downloads and caching strategies.
- Integration Testing:
- Begin rigorous testing to ensure smooth inter-service communication and to validate the message passing workflow.
This week’s achievements have laid the groundwork for our serverless infrastructure. We are now better equipped to handle function registration and inter-thread communication, paving the way for more advanced features in the upcoming weeks.
This post is licensed under CC BY 4.0 by the author.