Flyte

A Comprehensive ML Pipeline Monitoring Tool

Tirth Jain
Mar 25, 2024 · 10 min read
Ref: https://eng.lyft.com/introducing-flyte-cloud-native-machine-learning-and-data-processing-platform-fb2bb3046a59

Enhance Your Machine Learning Workflow with Flyte

In the realm of machine learning, the journey from model development to deployment is often filled with complexities. Among these challenges, ensuring the robustness and reliability of deployed models is paramount. This is where monitoring tools like Flyte come into play, offering a comprehensive solution to streamline the monitoring process and enhance the overall efficiency of ML workflows.

What is Flyte?

Flyte is an open-source, cloud-native platform designed to simplify and automate machine learning workflows. Developed by Lyft, Flyte aims to address the various pain points encountered during the ML lifecycle, including data processing, model training, deployment, and monitoring. One of its standout features is its monitoring capabilities, which enable users to track the performance and health of ML models in real-time.

Ref: https://www.fiddler.ai/ml-model-monitoring/model-monitoring-tools

Why Monitoring is Crucial

Effective monitoring is crucial for several reasons:

  1. Performance Tracking: Monitoring allows developers to track the performance metrics of ML models over time, enabling them to identify any deviations or anomalies that may arise.
  2. Resource Optimization: By monitoring resource utilization, such as CPU and memory usage, developers can optimize infrastructure allocation and ensure efficient utilization of resources.
  3. Detecting Drift: Model drift, where the performance of a deployed model degrades over time due to changes in data distribution, can have significant implications. Monitoring helps detect such drift early on, allowing for timely model retraining or recalibration (a minimal example of such a check is sketched after this list).
  4. Ensuring Compliance: For applications in regulated industries, monitoring ensures compliance with regulatory requirements by providing audit trails and documentation of model behavior.
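
To make item 3 concrete, here is a minimal sketch of a drift check using a two-sample Kolmogorov-Smirnov test on a single numeric feature. The test choice, the 0.05 threshold, and the has_drifted helper are assumptions made for illustration, not part of Flyte's API; a function like this is simply the kind of logic you would wire into a monitoring task.

import numpy as np
from scipy.stats import ks_2samp

def has_drifted(train_sample: np.ndarray, live_sample: np.ndarray, alpha: float = 0.05) -> bool:
    """Flag drift when the live data is unlikely to share the training distribution."""
    # A small p-value means the recent data no longer looks like the data
    # the model was trained on, so we flag it for retraining or recalibration.
    result = ks_2samp(train_sample, live_sample)
    return result.pvalue < alpha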

The Problem

Effective monitoring within ML pipelines is indispensable for several reasons. Firstly, it enables early detection of issues such as model drift, where the performance of a model degrades over time due to changes in data distribution or user behavior. Secondly, it facilitates resource optimization by tracking infrastructure utilization metrics, ensuring efficient allocation of resources. Thirdly, it plays a crucial role in maintaining regulatory compliance, providing audit trails and documentation of model behavior. However, traditional monitoring approaches often lack the flexibility and scalability required to meet the evolving demands of ML workflows, which calls for a more robust solution.

Key Features of Flyte’s Monitoring Tool

Flyte’s monitoring tool offers a range of features tailored to meet the needs of ML practitioners:

  1. Customizable Dashboards: Users can create personalized dashboards to visualize key performance metrics, such as accuracy, latency, and resource utilization, in real-time.
  2. Alerting Mechanisms: Flyte allows users to set up alerts based on predefined thresholds or anomalies detected in model performance, ensuring timely notifications of any issues (a short launch-plan sketch follows this list).
  3. Model Drift Detection: Leveraging advanced statistical techniques, Flyte’s monitoring tool can detect drift in model performance and data distribution, enabling proactive measures to maintain model accuracy.
  4. Integration with ML Lifecycle: Flyte seamlessly integrates with other stages of the ML lifecycle, such as model training and deployment, ensuring a cohesive workflow from development to production.
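
As a concrete illustration of item 2, flytekit lets you attach notifications to a launch plan so that an alert fires when an execution enters a given phase. The sketch below is an assumption-laden example rather than a recipe: the import path, launch-plan name, and recipient address are placeholders, and it refers to the movie_recommendation_workflow defined later in this post.

from flytekit import Email, LaunchPlan
from flytekit.models.core.execution import WorkflowExecutionPhase

# Path assumed from the project layout created in Step 2 below.
from workflows.example import movie_recommendation_workflow

# Send an email whenever an execution of this launch plan fails.
alerting_lp = LaunchPlan.get_or_create(
    name="movie_recommendation_alerts",
    workflow=movie_recommendation_workflow,
    notifications=[
        Email(
            phases=[WorkflowExecutionPhase.FAILED],
            recipients_email=["oncall@example.com"],  # placeholder address
        )
    ],
)

Note that notifications are delivered by the Flyte cluster, so they take effect when the launch plan is registered and run remotely rather than during local execution.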

Getting Started with Flyte Monitoring

Getting started with Flyte monitoring is straightforward. In this section, we will containerize an entire ML pipeline for a movie recommendation service (MLOps in a nutshell) by creating a local Kubernetes (k8s) cluster and letting Flyte automatically build a Docker image and push it to that cluster, which is Flyte's primary purpose. At the end, Flyte creates a dashboard that we can access to observe the entire project, with the different tasks being performed and their respective inputs and outputs.

Step 1 — Installation:

Flyte provides comprehensive documentation and installation guides to help users set up the platform within their infrastructure or cloud environment. First, we need to install the required library to use Flyte's workflow monitoring tooling.

$ pip3 install -U flytekit

Step 2 — Initialize a pyflyte project:

This step is crucial, as it sets up a proper working directory for the Flyte workflow to run in.

$ pyflyte init my_project

It will create a sample project with the following directory and file structure:

my_project
├── LICENSE
├── README.md
├── requirements.txt   # Python dependencies
└── workflows
    ├── __init__.py
    └── example.py     # Example Flyte workflow code

You can modify the files inside the workflows folder according to your project requirements and run pip3 install -r requirements.txt after updating the dependencies.
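
For the example below, the dependency list is small. A minimal requirements.txt might look like this (versions left unpinned for brevity; pin them in a real project):

flytekit
pandas
numpy
scikit-learn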

We will use the following example.py to recommend a list of 2 movies to a given user_id after training a TruncatedSVD model on a sample dataset.

The data follows a movie streaming service scenario: the service returns recommended movies to its users based on their past ratings as well as new ratings that keep arriving, where ratings range from 1 to 5 (both inclusive).

The data defined in the following code was sampled from a Kafka stream that continuously produces user-rating events in the following format:

<time>,<userid>,GET /rate/<movieid>=<rating>

— the user rates a movie with 1 to 5 stars
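
For example, a single record on that stream might look like the following (the values, including the timestamp format, are purely illustrative):

2024-03-25T10:15:00,42,GET /rate/101=4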

# example.py

import typing
import numpy as np
import pandas as pd
from sklearn.decomposition import TruncatedSVD
from flytekit import task, workflow

# Define your movie data (for demonstration, you can replace this with your own dataset)
movie_data = {
    'user_id': [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    'movie_id': [101, 102, 101, 102, 102, 103, 103, 104, 104, 105],
    'rating': [5, 4, 4, 3, 5, 3, 4, 2, 3, 1]
}
movies_df = pd.DataFrame(movie_data)

@task
def load_data() -> pd.DataFrame:
    """Load movie data."""
    return movies_df

@task
def preprocess_data(data: pd.DataFrame) -> pd.DataFrame:
    """Preprocess movie data if needed."""
    # Pivot the DataFrame to create a user-movie matrix
    pivoted_data = data.pivot(index='user_id', columns='movie_id', values='rating').fillna(0)
    return pivoted_data

@task
def train_svd(data: pd.DataFrame, n_components: int = 2) -> TruncatedSVD:
    """Train Truncated SVD model."""
    svd = TruncatedSVD(n_components=n_components)
    svd.fit(data)
    return svd

@task
def recommend_movies(svd: TruncatedSVD, user_id: int, n_recommendations: int, data: pd.DataFrame) -> typing.List[int]:
    """Recommend movies based on the encountered user_id."""
    user_ratings = data.loc[user_id].dropna()

    # Prepare user-specific data for prediction
    user_ratings_numeric = np.array(user_ratings.values).reshape(1, -1)

    # Predict ratings for the user
    predicted_ratings = svd.transform(user_ratings_numeric)

    # Calculate similarity between user preferences and movie ratings
    similarity = np.dot(predicted_ratings, svd.components_)

    # Get indices of top recommended movies
    top_movie_indices = similarity.argsort(axis=1)[:, -n_recommendations:]

    # Extract movie IDs for recommendations and convert to integers
    recommendations = [int(data.columns[idx]) for idx in top_movie_indices[0]]

    return recommendations

@workflow
def movie_recommendation_workflow() -> typing.List[int]:
    """Workflow to recommend movies."""
    data = load_data()
    preprocessed_data = preprocess_data(data=data)
    svd_model = train_svd(data=preprocessed_data, n_components=5)
    recommendations = recommend_movies(svd=svd_model, user_id=2, data=preprocessed_data, n_recommendations=2)
    return recommendations  # type: ignore

if __name__ == "__main__":
    # Execute the workflow
    print(f"Recommended Movies: {movie_recommendation_workflow()}")
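
Before moving on to pyflyte, you can also sanity-check the script with plain Python, since the __main__ guard calls the workflow directly and flytekit executes the tasks locally in-process:

$ python3 example.py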

Step 3 — Part 1: Running in a local Python environment

Run the workflow from the workflows folder of your project (refer to the structure in Step 2):

$ pyflyte run example.py movie_recommendation_workflow

# where the file name is `example.py` and the name of the workflow defined in
# the file is `movie_recommendation_workflow`

Output:

Figure 1: Running the Flyte workflow in a local Python environment

As you can see in the output, running the entire workflow returns a list of 2 recommended movies for user_id = 2.

Step 3 — Part 2: Running in a local Flyte cluster

Prerequisites:

  • Git & GitHub
  • Docker
  • flytectl, installed for your system (Mac/Linux/Windows)

Set the FLYTECTL_CONFIG environment variable (VERY IMPORTANT):

$ export FLYTECTL_CONFIG=~/.flyte/config.yaml
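
For reference, the configuration used for the local demo cluster typically looks roughly like the following; the exact path, keys, and port can differ between flytectl versions, so treat this as illustrative rather than authoritative:

admin:
  endpoint: localhost:30080
  authType: Pkce
  insecure: true
logger:
  show-source: true
  level: 0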

Start the Docker daemon on your machine to ensure that pyflyte can access Docker services.

Next, fire up the local cluster (demo) using the following command:

$ flytectl demo start

Figure 2: Creation of the flytectl cluster and pushing the respective Docker images

This command will start the cluster, build the required Docker images, and push them to the cluster on your local machine.

Now, create a project on the cluster that corresponds to the project you created locally, using the following command:

$ flytectl create project \
      --id "my-project" \
      --labels "my-label=my-project" \
      --description "My Flyte project" \
      --name "My project"

# You can make changes to the names accordingly

Once the project has been created, navigate to the workflows directory of your Flyte project:

$ cd my_project/workflows

Now, the final step is to run the workflow on the remote (your localhost) so you can see the dashboard with task status, using the following command:

$ pyflyte run \
      --remote \
      -p my-project \
      -d development \
      example.py movie_recommendation_workflow

After running this command, you will see a localhost URL in the output. Go to that URL, which points to the Flyte dashboard created for your project, where you can monitor the workflow.

Figure 3: Running the Flyte workflow on the local cluster

When you go to the link, you will find a dashboard like the one in Figure 4, where you can observe your model pipeline workflow and task statuses.

Figure 4: Flyte Dashboard

The execution of the tasks will take some time, so initially you might see all tasks as pending; wait a little while for the Flyte workflow to execute all the tasks on the remote cluster.

You can click on the Graph tab to see the view below, which shows how your tasks are ordered in the pipeline and their dependencies.

Figure 5: Workflow graph showing your tasks, their status, and other information

In the Nodes tab, you can click on the recommend_movies task to see your model output, similar to the output when we ran the workflow in the local Python environment.

Figure 6: Our model output in the `recommend_movies` task with a list of 2 movie_ids

Congratulations! You have successfully set up an ML pipeline, containerized it, and set up a dashboard for monitoring it.

Strengths and Limitations of Flyte Monitoring Tool

Strengths:

  1. Comprehensive Monitoring Capabilities: Flyte offers a wide range of monitoring features, including customizable dashboards, alerting mechanisms, and model drift detection, providing users with a holistic view of model performance and health.
  2. Integration with ML Lifecycle: Flyte seamlessly integrates with other stages of the ML lifecycle, such as model training and deployment, facilitating a cohesive workflow from development to production. This integration ensures consistency and ease of use for ML practitioners.
  3. Scalability and Flexibility: As a cloud-native platform, Flyte is designed to scale effortlessly with the growing demands of ML workflows. It supports various cloud environments and can adapt to changing requirements, making it suitable for both small-scale projects and enterprise-level deployments.
  4. Open-Source Community Support: Being open-source, Flyte benefits from a vibrant community of developers and contributors who actively contribute to its development and improvement. This community support fosters innovation and ensures ongoing enhancements to the platform.
  5. Ease of Use: Flyte prioritizes user experience, offering intuitive interfaces and comprehensive documentation to facilitate easy adoption and usage. Its user-friendly design reduces the learning curve for new users and accelerates the onboarding process.

Limitations:

  1. Learning Curve for Advanced Features: While Flyte’s basic features are easy to use, mastering its more advanced capabilities, such as custom metrics tracking and anomaly detection configuration, may require additional time and expertise. Users may need to invest in training or seek assistance from experienced practitioners.
  2. Resource Requirements: Deploying and maintaining Flyte in a production environment may entail significant resource requirements, including infrastructure costs and administrative overhead. Organizations need to assess their resources and capabilities to ensure smooth deployment and operation of the platform.
  3. Limited Support for Legacy Systems: Flyte is primarily designed for cloud-native environments and may have limited support for legacy systems or on-premises infrastructure. Organizations relying heavily on legacy technologies may encounter compatibility issues or require additional integration efforts.
  4. Community Support Dependency: While community support is a strength, it also introduces a level of dependency on the availability and responsiveness of the community. Users may encounter delays in receiving support or resolving issues if the community resources are limited or overwhelmed.
  5. Regulatory Compliance Challenges: While Flyte facilitates compliance with regulatory standards through its monitoring capabilities, organizations operating in highly regulated industries may still face challenges in ensuring full compliance. Additional customization and configuration may be required to meet specific regulatory requirements.

Overall, Flyte’s strengths in comprehensive monitoring, integration, scalability, community support, and ease of use make it a valuable tool for ML practitioners. However, users should be mindful of its limitations and consider their specific requirements and constraints when evaluating its suitability for their use cases.

Conclusion

In today’s fast-paced world of machine learning, effective monitoring is essential for maintaining the reliability and performance of deployed models. With its robust monitoring capabilities, Flyte emerges as a powerful tool for ML practitioners, offering a comprehensive solution to track model performance, detect anomalies, and ensure compliance with regulatory standards. By incorporating Flyte into your ML workflow, you can streamline the monitoring process and unlock new levels of efficiency and reliability in your machine learning endeavors.

References

  1. https://eng.lyft.com/introducing-flyte-cloud-native-machine-learning-and-data-processing-platform-fb2bb3046a59
  2. https://www.fiddler.ai/ml-model-monitoring/model-monitoring-tools
