@@ -1,9 +1,9 @@
# Airflow

Airflow is a platform to programmatically author, schedule and monitor workflows. Use airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed. When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative.
Airflow is a platform to programmatically author, schedule, and monitor workflows. Use airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command-line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed. When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative.
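
As a minimal sketch of "workflows as code", the snippet below defines a two-task DAG with Airflow's TaskFlow API (assuming Airflow 2.x); the task names, schedule, and dates are illustrative, not from the original text.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_pipeline():
    @task
    def extract():
        # Stand-in for pulling raw records from a source system.
        return [1, 2, 3]

    @task
    def load(records):
        # Airflow infers that load runs after extract from the call below.
        print(f"Loaded {len(records)} records")

    load(extract())


example_pipeline()  # instantiating the DAG registers it with the scheduler
```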

Visit the following resources to learn more:

- [@official@Airflow](https://airflow.apache.org/)
- [@official@Airflow Documentation](https://airflow.apache.org/docs)
- [@feed@Explore top posts about Apache Airflow](https://app.daily.dev/tags/apache-airflow?ref=roadmapsh)
@@ -1,11 +1,11 @@
# AWS / Azure / GCP

AWS (Amazon Web Services), Azure and GCP (Google Cloud Platform) are three leading providers of cloud computing services. AWS by Amazon is the oldest and the most established among the three, providing a breadth and depth of solutions ranging from infrastructure services like compute, storage, and databases to the machine and deep learning. Azure, by Microsoft, has integrated tools for DevOps, supports a large number of programming languages, and offers seamless integration with on-prem servers and Microsoft’s software. Google's GCP has strength in cost-effectiveness, live migration of virtual machines, and flexible computing options. All three have introduced various MLOps tools and services to boost capabilities for machine learning development and operations.
AWS (Amazon Web Services), Azure, and GCP (Google Cloud Platform) are three leading providers of cloud computing services. AWS by Amazon is the oldest and the most established among the three, providing a breadth and depth of solutions ranging from infrastructure services like compute, storage, and databases to machine learning and deep learning. Azure, by Microsoft, has integrated tools for DevOps, supports a large number of programming languages, and offers seamless integration with on-prem servers and Microsoft’s software. Google's GCP has strength in cost-effectiveness, live migration of virtual machines, and flexible computing options. All three have introduced various MLOps tools and services to boost capabilities for machine learning development and operations.

Visit the following resources to learn more about AWS, Azure, and GCP:
Visit the following resources to learn more:

- [@roadmap.sh@Visit Dedicated AWS Roadmap](https://roadmap.sh/aws)
- [@roadmap@Visit Dedicated AWS Roadmap](https://roadmap.sh/aws)
- [@official@Microsoft Azure](https://docs.microsoft.com/en-us/learn/azure/)
- [@official@Google Cloud Platform](https://cloud.google.com/)
- [@official@GCP Learning Resources](https://cloud.google.com/training)
- [@feed@Explore top posts about AWS](https://app.daily.dev/tags/aws?ref=roadmapsh)
5 changes: 3 additions & 2 deletions src/data/roadmaps/mlops/content/bash@mMzqJF2KQ49TDEk5F3VAI.md
@@ -2,8 +2,9 @@

Bash (Bourne Again Shell) is a Unix shell and command language used for interacting with the operating system through a terminal. It allows users to execute commands, automate tasks via scripting, and manage system operations. As the default shell for many Linux distributions, it supports command-line utilities, file manipulation, process control, and text processing. Bash scripts can include loops, conditionals, and functions, making it a powerful tool for system administration, automation, and task scheduling.

Learn more from the following resources:
Visit the following resources to learn more:

- [@roadmap@Visit the Dedicated Shell-Bash Roadmap](https://roadmap.sh/shell-bash)
- [@opensource@bash-guide](https://github.com/Idnan/bash-guide)
- [@article@Bash Reference Manual](https://www.gnu.org/software/bash/manual/bashref.html)
- [@video@Bash Scripting Course](https://www.youtube.com/watch?v=tK9Oc6AEnR4)
6 changes: 3 additions & 3 deletions src/data/roadmaps/mlops/content/cicd@a6vawajw7BpL6plH_nuAz.md
@@ -1,8 +1,8 @@
# CI / CD
# CI/CD

CI/CD (Continuous Integration and Continuous Deployment/Delivery) is a software development practice that automates the process of integrating code changes, running tests, and deploying updates. Continuous Integration focuses on regularly merging code changes into a shared repository, followed by automated testing to ensure code quality. Continuous Deployment extends this by automatically releasing every validated change to production, while Continuous Delivery ensures code is always in a deployable state, but requires manual approval for production releases. CI/CD pipelines improve code reliability, reduce integration risks, and speed up the development lifecycle.
CI/CD, which stands for Continuous Integration and Continuous Delivery/Deployment, is a software development practice that automates the process of building, testing, and deploying code changes. Continuous Integration focuses on frequently merging code changes into a central repository, followed by automated builds and tests. Continuous Delivery/Deployment then automates the release of these validated code changes to a staging or production environment.
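
Real pipelines are declared in a CI system's configuration (e.g., GitLab CI or GitHub Actions); purely as an illustration of the stage-gating idea, the hedged Python sketch below runs build, test, and deploy steps in order and stops at the first failure — all commands, including `deploy.py`, are placeholders.

```python
import subprocess
import sys

def run_stage(name: str, cmd: list[str]) -> None:
    # Run one pipeline stage; abort the whole pipeline on failure.
    print(f"--- {name} ---")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"{name} failed; later stages are skipped")

# Continuous Integration: build and test every change.
run_stage("build", ["python", "-m", "compileall", "src"])
run_stage("test", ["python", "-m", "pytest"])
# Continuous Delivery/Deployment: release only validated changes.
run_stage("deploy", ["python", "deploy.py"])
```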

Learn more from the following resources:
Visit the following resources to learn more:

- [@article@What is CI/CD? - GitLab](https://about.gitlab.com/topics/ci-cd/)
- [@article@What is CI/CD? - Redhat](https://www.redhat.com/en/topics/devops/what-is-ci-cd)
@@ -1,9 +1,9 @@
# Cloud Computing

**Cloud Computing** refers to the delivery of computing services over the internet rather than using local servers or personal devices. These services include servers, storage, databases, networking, software, analytics, and intelligence. Cloud Computing enables faster innovation, flexible resources, and economies of scale. There are various types of cloud computing such as public clouds, private clouds, and hybrids clouds. Furthermore, it's divided into different services like Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). These services differ mainly in the level of control an organization has over their data and infrastructures.
**Cloud Computing** refers to the delivery of computing services over the internet rather than using local servers or personal devices. These services include servers, storage, databases, networking, software, analytics, and intelligence. Cloud Computing enables faster innovation, flexible resources, and economies of scale. There are various types of cloud computing, such as public clouds, private clouds, and hybrid clouds. Furthermore, it's divided into different services like Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). These services differ mainly in the level of control an organization has over its data and infrastructures.

Learn more from the following resources:
Visit the following resources to learn more:

- [@article@Cloud Computing - IBM](https://www.ibm.com/think/topics/cloud-computing)
- [@article@What is Cloud Computing? - Azure](https://azure.microsoft.com/en-gb/resources/cloud-computing-dictionary/what-is-cloud-computing)
- [@video@What is Cloud Computing? - Amazon Web Services](https://www.youtube.com/watch?v=mxT233EdY5c)
@@ -1,9 +1,9 @@
# Cloud-native ML Services
# Cloud-Native ML Services

Most of the cloud providers offer managed services for machine learning. These services are designed to help data scientists and machine learning engineers to build, train, and deploy machine learning models at scale. These services are designed to be cloud-native, meaning they are designed to work with other cloud services and are optimized for the cloud environment.
Cloud-native ML services are pre-built machine learning tools and platforms offered by cloud providers. These services allow users to build, train, and deploy machine learning models without managing the underlying infrastructure. They often include features like automated model training, scalable deployment options, and integration with other cloud services.
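
As one hedged illustration, the sketch below asks AWS SageMaker (via `boto3`) to run a managed training job; every identifier here — bucket, role ARN, image URI — is a placeholder, and Azure ML and GCP's Vertex AI expose comparable APIs.

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Every name below is a placeholder for this sketch.
sm.create_training_job(
    TrainingJobName="example-job",
    RoleArn="arn:aws:iam::123456789012:role/ExampleSageMakerRole",
    AlgorithmSpecification={
        "TrainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/example:latest",
        "TrainingInputMode": "File",
    },
    InputDataConfig=[{
        "ChannelName": "train",
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://example-bucket/train/",
        }},
    }],
    OutputDataConfig={"S3OutputPath": "s3://example-bucket/output/"},
    ResourceConfig={
        "InstanceType": "ml.m5.large",
        "InstanceCount": 1,
        "VolumeSizeInGB": 10,
    },
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
)
```

The service provisions the instance, runs the training container, writes the model artifacts to storage, and tears everything down — no infrastructure for the user to manage.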

Learn more from the following resources:
Visit the following resources to learn more:

- [@official@AWS SageMaker](https://aws.amazon.com/sagemaker/)
- [@official@Azure ML](https://azure.microsoft.com/en-gb/products/machine-learning)
- [@video@What is Cloud Native?](https://www.youtube.com/watch?v=fp9_ubiKqFU)
@@ -1,13 +1,11 @@
# Containers
# Containerization

Containers are a construct in which cgroups, namespaces, and chroot are used to fully encapsulate and isolate a process. This encapsulated process, called a container image, shares the kernel of the host with other containers, allowing containers to be significantly smaller and faster than virtual machines.

These images are designed for portability, allowing for full local testing of a static image, and easy deployment to a container management platform.
Containerization is a form of operating system virtualization that packages an application and its dependencies into a single, isolated unit called a container. This container includes everything the application needs to run, such as code, runtime, system tools, libraries, and settings. Containers offer a consistent and portable environment for applications, ensuring they run the same way regardless of where they are deployed.

Visit the following resources to learn more:

- [@article@What are Containers? - Google Cloud](https://cloud.google.com/learn/what-are-containers)
- [@article@What is a Container? - Docker](https://www.docker.com/resources/what-container/)
- [@video@What are Containers?](https://www.youtube.com/playlist?list=PLawsLZMfND4nz-WDBZIj8-nbzGFD4S9oz)
- [@article@Articles about Containers - The New Stack](https://thenewstack.io/category/containers/)
- [@feed@Explore top posts about Containers](https://app.daily.dev/tags/containers?ref=roadmapsh)
@@ -2,7 +2,8 @@

Data Engineering deals with the collection, validation, storage, transformation, and processing of data. The objective is to provide reliable, efficient, and scalable data pipelines and infrastructure that allow data scientists to convert data into actionable insights. It involves steps like data ingestion, data storage, data processing, and data provisioning. Important concepts include designing, building, and maintaining data architecture, databases, processing systems, and large-scale processing systems. It requires extensive technical knowledge of tools and programming languages such as SQL, Python, and Hadoop.

Learn more from the following resources:
Visit the following resources to learn more:

- [@roadmap@Visit the Dedicated Data Engineer Roadmap](https://roadmap.sh/data-engineer)
- [@article@Data Engineering 101](https://www.redpanda.com/guides/fundamentals-of-data-engineering)
- [@video@Fundamentals of Data Engineering](https://www.youtube.com/watch?v=mPSzL8Lurs0)
@@ -1,10 +1,10 @@
# Data Ingestion Architectures

Data ingestion is the process of collecting, transferring, and loading data from various sources to a destination where it can be stored and analyzed. There are several data ingestion architectures that can be used to collect data from different sources and load it into a data warehouse, data lake, or other storage systems. These architectures can be broadly classified into two categories: batch processing and real-time processing. How you choose to ingest data will depend on the volume, velocity, and variety of data you are working with, as well as the latency requirements of your use case.
Data ingestion is the process of collecting, transferring, and loading data from various sources to a destination where it can be stored and analyzed. Several data ingestion architectures can be used to collect data from different sources and load it into a data warehouse, data lake, or other storage systems. These architectures can be broadly classified into two categories: batch processing and real-time processing. How you choose to ingest data will depend on the volume, velocity, and variety of data you are working with, as well as the latency requirements of your use case.

Lambda and Kappa architectures are two popular data ingestion architectures that combine batch and real-time processing to handle large volumes of data efficiently.
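
To make the batch vs. real-time split concrete, here is a minimal sketch of both styles; the CSV path, Kafka topic, and broker address are invented, and the streaming half assumes the `kafka-python` package.

```python
import csv

from kafka import KafkaConsumer  # assumption: kafka-python is installed

def ingest_batch(path: str) -> list[dict]:
    # Batch: load a complete file on a schedule (simple, higher latency).
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def ingest_stream() -> None:
    # Real-time: consume events one by one as they arrive (low latency).
    consumer = KafkaConsumer("events", bootstrap_servers="localhost:9092")
    for message in consumer:
        handle(message.value)

def handle(raw: bytes) -> None:
    print(f"received {len(raw)} bytes")  # hypothetical per-record handler
```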

Learn more from the following resources:
Visit the following resources to learn more:

- [@article@Data Ingestion Patterns](https://docs.aws.amazon.com/whitepapers/latest/aws-cloud-data-ingestion-patterns-practices/data-ingestion-patterns.html)
- [@video@What is a data pipeline?](https://www.youtube.com/watch?v=kGT4PcTEPP8)
@@ -1,9 +1,9 @@
# Data lakes & Warehouses
# Data Lakes & Warehouses

**Data Lakes** are large-scale data repository systems that store raw, untransformed data, in various formats, from multiple sources. They're often used for big data and real-time analytics requirements. Data lakes preserve the original data format and schema which can be modified as necessary. On the other hand, **Data Warehouses** are data storage systems which are designed for analyzing, reporting and integrating with transactional systems. The data in a warehouse is clean, consistent, and often transformed to meet wide-range of business requirements. Hence, data warehouses provide structured data but require more processing and management compared to data lakes.
Data lakes and data warehouses are both systems for storing large amounts of data, but they differ in structure and purpose. A data lake stores data in its raw, unprocessed format, allowing for flexibility in analysis and exploration. A data warehouse, on the other hand, stores data that has been structured and transformed for specific analytical purposes, often optimized for querying and reporting.
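
A toy, standard-library-only sketch of the contrast: the "lake" keeps raw files exactly as they arrived (schema-on-read), while the "warehouse" (SQLite standing in) enforces a schema before loading (schema-on-write); all paths and names are invented.

```python
import json
import sqlite3
from pathlib import Path

# "Lake": raw events are stored exactly as they arrived (schema-on-read).
lake = Path("lake/events")
lake.mkdir(parents=True, exist_ok=True)
(lake / "2024-01-01.json").write_text(json.dumps({"user": "a", "value": "3"}))

# "Warehouse" (SQLite as a stand-in): the schema is fixed up front and
# data is transformed to fit it before loading (schema-on-write).
db = sqlite3.connect("warehouse.db")
db.execute("CREATE TABLE IF NOT EXISTS events (user TEXT, value INTEGER)")
for path in lake.glob("*.json"):
    record = json.loads(path.read_text())
    db.execute("INSERT INTO events VALUES (?, ?)",
               (record["user"], int(record["value"])))
db.commit()
```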

Learn more from the following resources:
Visit the following resources to learn more:

- [@article@Data Lake Definition](https://azure.microsoft.com/en-gb/resources/cloud-computing-dictionary/what-is-a-data-lake)
- [@video@What is a Data Lake?](https://www.youtube.com/watch?v=LxcH6z8TFpI)
- [@video@@hat is a Data Warehouse?](https://www.youtube.com/watch?v=k4tK2ttdSDg)
- [@video@What is a Data Warehouse?](https://www.youtube.com/watch?v=k4tK2ttdSDg)
@@ -1,8 +1,8 @@
# Data Lineage and Feature Stores

**Data Lineage** refers to the life-cycle of data, including its origins, movements, characteristics and quality. It's a critical component in MLOps for tracking the journey of data through every process in a pipeline, from raw input to model output. Data lineage helps in maintaining transparency, ensuring compliance, and facilitating data debugging or tracing data related bugs. It provides a clear representation of data sources, transformations, and dependencies thereby aiding in audits, governance, or reproduction of machine learning models.
**Data Lineage** refers to the life-cycle of data, including its origins, movements, characteristics and quality. It's a critical component in MLOps for tracking the journey of data through every process in a pipeline, from raw input to model output. Data lineage helps in maintaining transparency, ensuring compliance, and facilitating data debugging or tracing data-related bugs. It provides a clear representation of data sources, transformations, and dependencies, thereby aiding in audits, governance, or reproduction of machine learning models.
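
Dedicated tools handle this in practice, but as a hand-rolled sketch of the idea, each step below appends a record of its operation and inputs, so any output can be traced back to its source; all names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Tracked:
    data: list
    lineage: list = field(default_factory=list)  # ordered history of steps

def step(src: Tracked, op: str, fn) -> Tracked:
    # Apply fn and append a lineage record describing this transformation.
    return Tracked(fn(src.data),
                   src.lineage + [{"op": op, "rows_in": len(src.data)}])

raw = Tracked([1, -2, 3], [{"op": "ingest", "source": "raw_input.csv"}])
clean = step(raw, "drop_negatives", lambda d: [x for x in d if x >= 0])
scaled = step(clean, "scale_x10", lambda d: [x * 10 for x in d])
print(scaled.lineage)  # full provenance from source file to final values
```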

Learn more from the following resources:
Visit the following resources to learn more:

- [@article@What is Data Lineage?](https://www.ibm.com/topics/data-lineage)
- [@article@What is a Feature Store](https://www.snowflake.com/guides/what-feature-store-machine-learning/)
@@ -2,7 +2,7 @@

Data pipelines are a series of automated processes that transport and transform data from various sources to a destination for analysis or storage. They typically involve steps like data extraction, cleaning, transformation, and loading (ETL) into databases, data lakes, or warehouses. Pipelines can handle batch or real-time data, ensuring that large-scale datasets are processed efficiently and consistently. They play a crucial role in ensuring data integrity and enabling businesses to derive insights from raw data for reporting, analytics, or machine learning.
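
Purely as a minimal sketch, the ETL steps below extract rows from a (hypothetical) CSV file, clean and transform them, and load them into an in-memory SQLite table using only the standard library.

```python
import csv
import sqlite3

def extract(path: str) -> list[dict]:
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    # Clean: drop incomplete rows and normalize types.
    return [(r["id"], float(r["amount"])) for r in rows if r.get("amount")]

def load(rows: list[tuple], db: sqlite3.Connection) -> None:
    db.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    db.commit()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id TEXT, amount REAL)")
load(transform(extract("orders.csv")), db)  # orders.csv is hypothetical
```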

Learn more from the following resources:
Visit the following resources to learn more:

- [@article@What is a Data Pipeline? - IBM](https://www.ibm.com/topics/data-pipeline)
- [@video@What are Data Pipelines?](https://www.youtube.com/watch?v=oKixNpz6jNo)
@@ -1,11 +1,11 @@
# Docker

Docker is a platform for working with containerized applications. Among its features are a daemon and client for managing and interacting with containers, registries for storing images, and a desktop application to package all these features together.
Docker is a platform that uses operating system-level virtualization to deliver software in packages called containers. These containers isolate software from its environment and ensure that it works uniformly despite differences between development and production environments. Docker simplifies the process of building, shipping, and running applications by packaging all dependencies, libraries, and configurations into a single unit.
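
As a small illustration of that "single unit" idea, the sketch below starts a throwaway container through Docker's Python SDK (the `docker` package — an assumption here, along with a running local daemon); the image and command are arbitrary examples.

```python
import docker  # assumption: the Docker SDK for Python is installed

client = docker.from_env()  # connect to the local Docker daemon

# The image ships the interpreter and all dependencies; the host only
# needs a container runtime to execute it.
logs = client.containers.run(
    "python:3.12-slim",
    ["python", "-c", "print('hello from a container')"],
    remove=True,  # clean up the container after it exits
)
print(logs.decode())
```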

Visit the following resources to learn more:

- [@roadmap@Visit Dedicated Docker Roadmap](https://roadmap.sh/docker)
- [@official@Docker Documentation](https://docs.docker.com/)
- [@video@Docker Tutorial](https://www.youtube.com/watch?v=RqTEHSBrYFw)
- [@video@Docker Simplified in 55 Seconds](https://youtu.be/vP_4DlOH1G4)
- [@feed@Explore top posts about Docker](https://app.daily.dev/tags/docker?ref=roadmapsh)
@@ -2,7 +2,7 @@

**Experiment Tracking** is an essential part of MLOps, providing a system to monitor and record the different experiments conducted during the machine learning model development process. This involves capturing, organizing and visualizing the metadata associated with each experiment, such as hyperparameters used, models produced, metrics like accuracy or loss, and other information about the computational environment. This tracking allows for reproducibility of experiments, comparison across different experiment runs, and helps in identifying the best models.
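
For a concrete feel, the hedged sketch below records one run's hyperparameters and a resulting metric with MLflow (one of the tools linked below); the parameter names and values are invented.

```python
import mlflow  # assumption: mlflow is installed

mlflow.set_experiment("example-experiment")

with mlflow.start_run():
    # Record what was tried...
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 100)
    # ...and how it performed, so runs can be compared and reproduced.
    mlflow.log_metric("accuracy", 0.93)
```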

Learn more from the following resources:
Visit the following resources to learn more:

- [@article@Experiment Tracking](https://madewithml.com/courses/mlops/experiment-tracking/#dashboard)
- [@article@ML Flow Model Registry](https://mlflow.org/docs/latest/model-registry.html)