
DataCROP Documentation

Static Site Generation

Jekyll, Just the Docs, Ruby

Language & Tooling

Vue.js, Spring Boot

Data & Workflow Platform

Apache Airflow, Apache Kafka, ELK Stack, MongoDB, RabbitMQ, Keycloak

CI/CD & Hosting

GitHub Actions, GitHub Pages

Overview

DataCROP (Data Collection Routing & Processing) is a configurable framework for real-time data collection, transformation, filtering, and management across IoT and cybersecurity domains. It emphasizes interoperability through a specialized data model for sources, processors, and results, enabling flexible workflow-driven analytics.
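As a rough illustration of this source/processor/result model, the hypothetical Python sketch below shows what such linked descriptors might look like. All field names here are assumptions for illustration, not DataCROP's actual schema:

    # Hypothetical sketch of linked DataCROP-style descriptors. The field
    # names are illustrative assumptions, not the framework's real schema.
    data_source = {
        "id": "sensor-42",
        "kind": "mqtt",              # where raw readings come from
        "format": "json",
    }

    processor = {
        "id": "filter-temperature",
        "input": data_source["id"],  # consumes the source above
        "operation": "filter",       # e.g. drop out-of-range readings
        "params": {"min": -20, "max": 60},
    }

    result = {
        "id": "clean-temperature",
        "produced_by": processor["id"],
        "sink": "mongodb",           # persisted for downstream analytics
    }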

Versions and Stack Highlights

  • Barley (v1.0): MongoDB, Apache Kafka, RabbitMQ, Kafka Streams, Node.js, React, optional Hyperledger Fabric.
  • Farro (v2.0): Builds on Barley; MongoDB, Apache Kafka, RabbitMQ, Node.js, React, and algorithm support (Java, Python, R).
  • Maize (v3.0, in progress): MongoDB, Apache Kafka, ELK stack; expanding observability and data services.
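Apache Kafka is the messaging backbone shared by all three releases. As a minimal sketch of how a collector might publish readings onto it (assuming a broker on localhost:9092 and the kafka-python client; the topic name and payload are purely illustrative):

    import json
    from kafka import KafkaProducer  # pip install kafka-python

    # Connect to a local broker and serialize payloads as JSON bytes.
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    # Publish one illustrative reading to a hypothetical topic.
    producer.send("datacrop.readings", {"sensor": "sensor-42", "temp_c": 21.5})
    producer.flush()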

Demo Environment

Deployable Farro demo: https://github.com/datacrop/farro-demo-deployment-scripts.

Run the Docs Locally

  1. Ensure Ruby and Bundler are installed.
  2. Install dependencies:
    bundle install
    
  3. Serve the site:
    bundle exec jekyll serve
    
  4. Visit http://localhost:4000 (if a baseurl is set in _config.yml, the site is served under that path).

Documentation Structure

  • _content/home/: High-level framework overview and roadmap.
  • _content/overview/: Getting started and authentication with Keycloak.
  • _content/airflow/: Airflow processing engine deployment steps.
  • _content/creating-workflows/, _content/creating-data-models/, _content/worker/: Building workflows, data models, and workers (see the DAG sketch after this list).
  • _content/dev-guide/, _content/editor/, _content/user-guide/: Guidance for developers and end users.
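Because workflows execute on the Airflow processing engine, a DataCROP workflow ultimately maps onto an Airflow DAG. The sketch below (a minimal example assuming Airflow 2.4+; the DAG id, task names, and callables are all hypothetical) shows the general two-step shape; the actual workflow format is documented in _content/creating-workflows/:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def collect():
        # Placeholder: pull raw data from a source (e.g. a Kafka topic).
        print("collecting")

    def transform():
        # Placeholder: filter and normalize the collected data.
        print("transforming")

    # A hypothetical two-step workflow: collect, then transform.
    with DAG(
        dag_id="datacrop_example",
        start_date=datetime(2024, 1, 1),
        schedule=None,   # trigger manually
        catchup=False,
    ) as dag:
        collect_task = PythonOperator(task_id="collect", python_callable=collect)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        collect_task >> transform_task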
