Kora AI Governance
Kora AI Governance is a comprehensive ML model lifecycle management platform that provides model registry, version control, monitoring, drift detection, fairness evaluation, explainability reporting, and A/B experimentation through a unified API. Built for financial institutions, fintechs, and any organization deploying ML models that require auditability, compliance, and responsible AI practices.
What you can do
- Model Registry — Register, version, and catalog all ML models across your organization. Track model metadata, ownership, approval status, and deployment history in a central repository.
- Version Management — Manage model versions with full lineage tracking. Promote versions through lifecycle stages (Development, Staging, Production) with approval workflows.
- Model Monitoring — Monitor deployed models in real time with configurable metric collection. Track prediction latency, throughput, error rates, and custom business metrics.
- Drift Detection — Detect data drift, concept drift, and prediction drift using statistical tests (Kolmogorov-Smirnov, Population Stability Index, Chi-Square, Jensen-Shannon divergence). Configure thresholds and receive alerts when drift exceeds acceptable limits.
- Fairness Evaluation — Evaluate model fairness across protected attributes (gender, ethnicity, age, geography). Compute disparate impact, equalized odds, demographic parity, and other fairness metrics.
- Explainability — Generate model explanations using SHAP values, LIME, feature importance, and partial dependence plots. Produce human-readable explanation reports for regulators and auditors.
- A/B Experiments — Run controlled experiments comparing model versions with configurable traffic splits, statistical significance targets, and guardrail metrics.
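To make the drift statistics above concrete, here is a minimal sketch of one of them, the Population Stability Index (PSI), computed over a single feature. This is an illustrative implementation, not the Kora API; the function name and the baseline-quantile binning strategy are assumptions.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline ("expected") sample and a live ("actual")
    sample of one feature. Common rule of thumb: < 0.1 stable,
    0.1-0.25 moderate drift, > 0.25 major drift."""
    # Bin edges come from the baseline so both samples are scored consistently.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside the baseline range
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero and log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
```

A version of this test runs per feature on a schedule; the configurable threshold mentioned above maps to the rule-of-thumb cutoffs in the docstring.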
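The fairness metrics listed above can likewise be sketched in a few lines. The snippet below computes per-group selection rates, disparate impact, and demographic parity difference for a binary classifier; it is a simplified illustration (function name and return shape are assumptions, not the Kora API).

```python
import numpy as np

def fairness_report(y_pred, group):
    """Group fairness metrics for binary predictions.
    y_pred: array of 0/1 predictions; group: protected-attribute labels."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    # Selection rate = share of positive predictions within each group.
    rates = {str(g): float(y_pred[group == g].mean()) for g in np.unique(group)}
    lo, hi = min(rates.values()), max(rates.values())
    return {
        "selection_rate": rates,
        # Four-fifths rule: disparate impact below 0.8 is a common red flag.
        "disparate_impact": lo / hi if hi > 0 else float("nan"),
        "demographic_parity_difference": hi - lo,
    }
```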
How it works
Kora AI Governance integrates with your ML infrastructure to provide end-to-end model lifecycle management.
┌──────────────┐     ┌─────────────┐     ┌──────────────┐
│   Your ML    │────▶│   AI Gov    │────▶│ Your Server  │
│   Pipeline   │     │     API     │     │  (webhook)   │
└──────────────┘     └──────┬──────┘     └──────────────┘
                            │
              ┌─────────────┼─────────────┐
              │             │             │
        ┌─────▼─────┐ ┌─────▼─────┐ ┌─────▼─────┐
        │   Model   │ │   Drift   │ │ Fairness  │
        │ Registry  │ │  Monitor  │ │ Evaluator │
        └───────────┘ └───────────┘ └───────────┘
- Register models with metadata, ownership, and classification (risk tier, use case, data sensitivity)
- Push model versions with artifacts, training metadata, and performance benchmarks
- Monitor predictions by sending inference logs for real-time metric tracking
- Detect drift with automated statistical tests on feature distributions and prediction outputs
- Evaluate fairness by running bias audits across protected demographic attributes
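The register → version → promote flow in the steps above can be sketched as a minimal in-memory registry with stage gates. Every class, method, and stage name here is illustrative only, a sketch of the workflow rather than the Kora client library:

```python
from dataclasses import dataclass, field

STAGES = ("Development", "Staging", "Production")

@dataclass
class ModelVersion:
    version: str
    stage: str = "Development"
    approved: bool = False

@dataclass
class ModelRecord:
    name: str
    owner: str
    risk_tier: str
    versions: dict = field(default_factory=dict)

class ModelRegistry:
    def __init__(self):
        self._models = {}

    def register(self, name, owner, risk_tier):
        self._models[name] = ModelRecord(name, owner, risk_tier)
        return self._models[name]

    def push_version(self, name, version):
        self._models[name].versions[version] = ModelVersion(version)
        return self._models[name].versions[version]

    def approve(self, name, version):
        self._models[name].versions[version].approved = True

    def promote(self, name, version):
        """Move a version to the next stage; each gate needs approval."""
        mv = self._models[name].versions[version]
        if not mv.approved:
            raise PermissionError(f"{name}:{version} needs approval before promotion")
        idx = STAGES.index(mv.stage)
        if idx == len(STAGES) - 1:
            raise ValueError(f"{name}:{version} is already in Production")
        mv.stage = STAGES[idx + 1]
        mv.approved = False  # each stage gate requires a fresh approval
        return mv.stage
```

The design point the sketch illustrates: promotion is blocked until an explicit approval, and the approval is consumed at each gate, which is what makes the workflow auditable.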
Who it's for
- ML engineering teams needing a central registry and deployment tracking for models
- Risk and compliance teams requiring auditable model governance with approval workflows
- Data science teams running experiments and tracking model performance over time
- Internal audit teams needing explainability reports and fairness assessments
- Regulators seeking transparency into AI/ML model usage in financial services
Next steps
- Quickstart — Register a model and start monitoring in 5 minutes
- Authentication — Set up API keys and environments
- API Reference — Explore every endpoint