ml-deployment

Here are 32 public repositories matching this topic...

jetson-orin-matmul-analysis

Scientific CUDA benchmarking framework: 4 implementations × 3 power modes × 5 matrix sizes on the Jetson Orin Nano. 1,282 GFLOPS peak, 90% of peak performance at 88% of the power in the 25 W mode, 99.5% accuracy validation, and an edge AI deployment guide.

  • Updated Oct 14, 2025
  • Python
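The headline GFLOPS number above comes from the standard operation count for a square matrix multiply: an n×n GEMM performs roughly 2n³ floating-point operations. A minimal sketch of that arithmetic, timing NumPy as a CPU stand-in for the repository's CUDA kernels (the matrix size and repeat count are illustrative, not taken from the project):

```python
import time
import numpy as np

def matmul_gflops(n: int = 2048, repeats: int = 10) -> float:
    """Time an n x n matmul and convert the average runtime to GFLOPS.

    A square GEMM costs ~2 * n**3 floating-point operations
    (n**3 multiplies plus n**3 additions).
    """
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    a @ b  # warm-up so one-time allocation cost is excluded

    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    elapsed = (time.perf_counter() - start) / repeats

    return (2 * n**3) / elapsed / 1e9

if __name__ == "__main__":
    print(f"{matmul_gflops():.1f} GFLOPS")
```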

ml-deploy-lite is a Python library designed to simplify the deployment of machine learning models. It allows developers to quickly turn their models into REST APIs or gRPC services with minimal configuration. The library integrates seamlessly with Docker and Kubernetes, providing built-in monitoring and logging for performance and error tracking.

  • Updated Nov 26, 2024
  • Python
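As a generic illustration of the model-as-a-REST-API pattern the ml-deploy-lite description refers to, here is a sketch using FastAPI directly; it is not ml-deploy-lite's own interface, and the model path and payload shape are placeholders:

```python
# Generic "serve a model over REST" pattern (FastAPI shown for illustration;
# this is NOT ml-deploy-lite's actual API).
from typing import List

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # placeholder path to a trained model

class PredictRequest(BaseModel):
    features: List[float]

@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    # scikit-learn-style models expect a 2-D array of samples
    prediction = model.predict([req.features])
    return {"prediction": prediction.tolist()}
```

Saved as app.py, this runs with `uvicorn app:app` and can be containerized with a standard Python Dockerfile, which is the kind of flow a wrapper library automates.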

🔍 Analyze CUDA matrix multiplication performance and power consumption on NVIDIA Jetson Orin Nano across multiple implementations and settings.

  • Updated Dec 10, 2025
  • Python
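A rough sketch of the kind of size sweep such an analysis performs, using CuPy (assumed installed) to run the multiplications on the GPU; the sizes and repeat count are illustrative, and power measurement is out of scope here:

```python
# Sweep matrix sizes on the GPU with CuPy (assumed installed); sizes and
# repeat counts are illustrative, not the project's exact configuration.
import time
import cupy as cp

def gpu_matmul_gflops(n: int, repeats: int = 10) -> float:
    a = cp.random.rand(n, n).astype(cp.float32)
    b = cp.random.rand(n, n).astype(cp.float32)
    cp.matmul(a, b)
    cp.cuda.Device().synchronize()  # ensure the warm-up kernel finished

    start = time.perf_counter()
    for _ in range(repeats):
        cp.matmul(a, b)
    cp.cuda.Device().synchronize()  # wait for all queued kernels
    elapsed = (time.perf_counter() - start) / repeats
    return (2 * n**3) / elapsed / 1e9  # ~2*n^3 FLOPs per square matmul

for size in (512, 1024, 2048, 4096):
    print(f"n={size}: {gpu_matmul_gflops(size):.1f} GFLOPS")
```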

An end-to-end ML model deployment pipeline on GCP: train in Cloud Shell, containerize with Docker, push to Artifact Registry, deploy on GKE, and build a basic frontend that interacts with the model through the exposed endpoints. The project showcases the benefits of containerized deployments, centralized image management, and automated orchestration using GCP tools.

  • Updated Mar 4, 2024
  • Python
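Once the GKE service exposes the model, the "basic frontend" step reduces to calling the prediction endpoint over HTTP. A minimal client sketch; the URL and payload shape are placeholders, not the project's actual API:

```python
# Minimal client for a model served behind a GKE Service/Ingress; the URL
# and payload shape below are placeholders, not the project's actual API.
import requests

ENDPOINT = "http://<external-ip>/predict"  # replace with the exposed service address

payload = {"features": [5.1, 3.5, 1.4, 0.2]}  # example feature vector
response = requests.post(ENDPOINT, json=payload, timeout=10)
response.raise_for_status()
print(response.json())
```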

🪘 Tabla Drum Image Generator – AI-powered tabla drum image generation using Stable Diffusion & GANs. Features custom dataset curation, ML training pipeline, and scalable API deployment.

  • Updated Mar 28, 2025
  • Python
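The entry above trains its own generator; as a generic illustration of driving Stable Diffusion from Python, here is the Hugging Face diffusers pattern. The base-model ID and prompt are illustrative, not the project's fine-tuned weights:

```python
# Generic text-to-image generation with Hugging Face diffusers; the model ID
# and prompt are illustrative, not the project's custom-trained checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # any public Stable Diffusion checkpoint works
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # requires a CUDA-capable GPU

image = pipe("a pair of tabla drums, studio lighting, highly detailed").images[0]
image.save("tabla.png")
```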

A full-stack machine learning architecture for food delivery ETA prediction, combining a DVC-driven pipeline, automated CI/CD workflows, cloud artifact management, and an LGBM-based stacked regression ensemble for accurate time estimates.

  • Updated May 11, 2025
  • Python
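A stacked regression ensemble feeds the out-of-fold predictions of several base learners into a final estimator. A minimal sketch with LightGBM base models on synthetic data; the project's actual features, DVC stages, and tuning are not reproduced here:

```python
# LGBM-based stacked regression ensemble on synthetic data (feature set,
# hyperparameters, and pipeline stages here are illustrative only).
from lightgbm import LGBMRegressor
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=10, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base learners' cross-validated predictions become inputs to the final estimator.
stack = StackingRegressor(
    estimators=[
        ("lgbm_a", LGBMRegressor(n_estimators=300, learning_rate=0.05)),
        ("lgbm_b", LGBMRegressor(n_estimators=300, num_leaves=63)),
    ],
    final_estimator=Ridge(),
)
stack.fit(X_train, y_train)
print(f"R^2 on held-out data: {stack.score(X_test, y_test):.3f}")
```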

Add this topic to your repo

To associate your repository with the ml-deployment topic, visit your repo's landing page and select "manage topics."
