Add bisenetv2. My implementation of BiSeNet
Serving Inside PyTorch
ClearML - Model-Serving Orchestration and Repository Solution
Deploy DL/ML inference pipelines with minimal extra code.
Roundabout traffic analysis using computer vision
Build a recommender system with PyTorch + Redis + Elasticsearch + Feast + Triton + Flask, covering vector recall, DeepFM ranking, and a web application.
Tiny configuration for Triton Inference Server
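A "tiny configuration" for Triton usually means a minimal `config.pbtxt` in the model repository. The sketch below is illustrative only, not taken from the repository above; the model name, platform, shapes, and dtypes are placeholder assumptions.

```
# Minimal Triton model configuration (config.pbtxt) — all values are
# placeholders; adjust name/platform/dims to the actual model.
name: "my_model"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "images"
    data_type: TYPE_FP32
    dims: [ 3, 640, 640 ]
  }
]
output [
  {
    name: "output0"
    data_type: TYPE_FP32
    dims: [ -1 ]
  }
]
```

Triton can often auto-complete this file for formats that carry their own metadata (ONNX, TensorRT), so a hand-written config this small is mainly useful for overriding batching or naming.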
Set up CI for DL from scratch on AGX or PC: CUDA/cuDNN/TensorRT/onnx2trt/onnxruntime/onnxsim/PyTorch/Triton-Inference-Server/Bazel/Tesseract/PaddleOCR/NVIDIA-docker/MinIO/Supervisord.
Provides an ensemble model to deploy a YOLOv8 ONNX model to Triton
Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch), including a PyTorch -> ONNX -> TensorRT converter and inference pipelines for both standalone TensorRT and a multi-format Triton server. Supported model formats for Triton inference: TensorRT engine, TorchScript, and ONNX.
Generate Glue Code in seconds to simplify your Nvidia Triton Inference Server Deployments
Serving Example of CodeGen-350M-Mono-GPTJ on Triton Inference Server with Docker and Kubernetes
End-to-end MLOps architecture for polyp segmentation, featuring distributed Ray training, MLflow experiment tracking, and automated CI/CD with Kubeflow Pipelines and KServe (Triton) deployment on Google Kubernetes Engine.
FastAPI middleware for comparing different ML model serving approaches
Provides an ensemble model to deploy a YOLOv8 TensorRT model to Triton
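An "ensemble model" in Triton is itself declared in a `config.pbtxt` that chains other models in the repository. The sketch below shows the general shape of such a pipeline (preprocessing step feeding a TensorRT engine); every model name, tensor name, and dimension here is a made-up placeholder, not taken from the repositories above.

```
# Hedged sketch of an ensemble config.pbtxt — model and tensor names
# ("preprocess", "yolov8_trt", "preprocessed_image") are illustrative.
name: "yolov8_ensemble"
platform: "ensemble"
max_batch_size: 8
input [
  { name: "raw_image" data_type: TYPE_UINT8 dims: [ -1 ] }
]
output [
  { name: "detections" data_type: TYPE_FP32 dims: [ -1, -1 ] }
]
ensemble_scheduling {
  step [
    {
      # Step 1: decode/resize/normalize the raw bytes.
      model_name: "preprocess"
      model_version: -1
      input_map  { key: "raw_image" value: "raw_image" }
      output_map { key: "preprocessed" value: "preprocessed_image" }
    },
    {
      # Step 2: run the TensorRT engine on the preprocessed tensor.
      model_name: "yolov8_trt"
      model_version: -1
      input_map  { key: "images" value: "preprocessed_image" }
      output_map { key: "output0" value: "detections" }
    }
  ]
}
```

In each `input_map`/`output_map`, the key is the step model's own tensor name and the value is the ensemble-level tensor it connects to; Triton schedules the steps as one server-side pipeline, avoiding an extra client round-trip between preprocessing and inference.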
A DeepStream/Triton Server sample application that uses YOLOv7, YOLOv7-QAT, and YOLOv9 models to run inference on video files or RTSP streams.
Horizontal Pod Autoscaler (HPA) project on Kubernetes using NVIDIA Triton Inference Server with an AI model
Python wrapper class for OpenVINO Model Server. Users can submit inference requests to OVMS with just a few lines of code.