
Platform Documentation

Everything you need to integrate, configure, and get the most out of VisionAI. From quick-start guides to advanced API references, find the resources that match your experience level and goals.

🚀 Quick Start Paths

Choose a path based on your role and objectives. Each track provides a focused sequence of guides tailored to get you productive in the shortest time possible.

Getting Started

Set up your account, connect your first data source, and run your initial analysis within 30 minutes. This guide walks through account creation, workspace configuration, and the essentials of the platform dashboard. You will learn how to navigate the main interface, understand your data pipeline status, and verify that connections are working correctly before moving to more advanced configurations.

Estimated time: 30 minutes
Beginner

API Reference

Complete REST API documentation with endpoints for data ingestion, model management, predictions, and automation triggers. Every endpoint includes request/response examples, authentication headers, rate limit details, and error codes. Our API follows the OpenAPI 3.0 specification, and interactive documentation is available directly in your workspace for testing endpoints against sandbox data.

Reference material
Intermediate

Integration Guides

Step-by-step instructions for connecting VisionAI to your existing tech stack. Covers database connectors (PostgreSQL, MySQL, MongoDB, Snowflake, BigQuery), cloud storage (AWS S3, Google Cloud Storage, Azure Blob), CRM systems (Salesforce, HubSpot), and business intelligence tools (Tableau, Power BI, Looker). Each guide includes authentication setup, field mapping, and troubleshooting for common connection issues.

Varies by integration
Intermediate

Data Connectors

Detailed documentation for every supported data source. Learn about schema auto-detection, data type mapping, incremental sync versus full refresh options, and scheduling configurations. Our connectors handle format conversion, encoding detection, and null value management automatically. This section also covers custom connector development using our Connector SDK for proprietary data sources not covered by built-in options.

Estimated time: 1-2 hours
Intermediate

Machine Learning Models

Understand how VisionAI selects, trains, and deploys machine learning models on your data. This section covers the AutoML pipeline, feature engineering options, model evaluation metrics, hyperparameter configuration, and the retraining schedule system. Advanced users can access custom model upload functionality to deploy their own scikit-learn, TensorFlow, or PyTorch models within the platform infrastructure.

Estimated time: 2-3 hours
Advanced

Security & Compliance

Comprehensive overview of our security architecture, encryption standards, access control models, and compliance certifications. Includes instructions for configuring role-based access controls (RBAC), setting up SSO with SAML 2.0 or OIDC providers, enabling audit logging, and managing data residency preferences. This section also provides downloadable compliance reports for SOC 2 Type II, ISO 27001, and GDPR readiness assessments.

Reference material
All Levels

Platform Overview

VisionAI is a cloud-based platform built on a microservices architecture designed for high availability and horizontal scaling. The system processes data through a series of configurable pipeline stages: ingestion, cleaning, transformation, analysis, and output. Each stage operates independently, meaning you can customize or bypass specific steps depending on your requirements.

The platform consists of four core modules that work together seamlessly. The Data Engine handles all connections, schema management, and transformation logic. The ML Engine runs model selection, training, evaluation, and deployment. The Automation Engine manages rule-based and AI-triggered workflows. The Presentation Layer powers dashboards, alerts, and report generation.
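The stage independence described above can be pictured with a small sketch. The stage names come from this page; the handler API is entirely hypothetical, since real pipelines are configured in the workspace rather than in code:

```python
# Illustrative only — the real pipeline is configured in the workspace UI/API,
# not with Python callables. Stage names are taken from the paragraph above.
STAGES = ["ingestion", "cleaning", "transformation", "analysis", "output"]

def run_pipeline(data, handlers, skip=()):
    """Run each configured stage in order, bypassing any stage in `skip`."""
    for stage in STAGES:
        if stage in skip:
            continue  # stages are independent, so one can be bypassed safely
        data = handlers[stage](data)
    return data

# Toy handlers that just record which stages ran:
handlers = {stage: (lambda d, s=stage: d + [s]) for stage in STAGES}
print(run_pipeline([], handlers, skip=("cleaning",)))
# ['ingestion', 'transformation', 'analysis', 'output']
```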

Architecture Highlights

  • Event-driven pipeline with message queue (Apache Kafka) for reliable data flow
  • Auto-scaling compute nodes for ML workloads using Kubernetes orchestration
  • Multi-region deployment options with data residency controls (EU, US, APAC)
  • 99.7% uptime SLA backed by redundant infrastructure and automatic failover
  • End-to-end encryption with AES-256 at rest and TLS 1.3 in transit
[Figure: VisionAI platform architecture diagram showing the data pipeline microservices]

Authentication

VisionAI uses OAuth 2.0 with Bearer tokens for API authentication. Each workspace generates unique API keys that can be scoped to specific permissions and expiration periods. For production deployments, we recommend using service account credentials with the minimum required permission set.

Generating API Keys

Navigate to your workspace Settings panel and select the "API Keys" tab. Click "Create New Key" and assign a descriptive name, permission scope, and optional expiration date. The full key is displayed only once during creation, so store it securely in your environment variables or secrets manager. If a key is compromised, revoke it immediately from the same settings panel.

curl -X GET https://api.visionai-platform.com/v2/datasets \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -H "X-Workspace-ID: ws_abc123"

SSO Configuration

Enterprise plans support Single Sign-On via SAML 2.0 and OpenID Connect (OIDC). To configure SSO, provide your identity provider metadata URL, entity ID, and certificate in the Authentication settings. VisionAI supports integration with Okta, Azure Active Directory, Google Workspace, and any SAML 2.0 compliant provider. Users authenticated through SSO inherit role mappings based on group attributes defined in your IdP configuration.

Rate Limits

API rate limits vary by plan tier. Starter plans allow 100 requests per minute, Professional plans support 500 requests per minute, and Enterprise plans have configurable limits up to 5,000 requests per minute. Rate limit headers are included in every API response:

X-RateLimit-Limit: 500
X-RateLimit-Remaining: 487
X-RateLimit-Reset: 1706832000
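A client can use these headers to pace itself instead of retrying blindly. A minimal sketch, using the header names shown above; the wait-until-reset policy is our own choice, not a documented requirement:

```python
import time

def backoff_seconds(headers, now=None):
    """Seconds to wait before the next call, based on the rate-limit
    headers shown above. Returns 0.0 while quota remains."""
    if int(headers.get("X-RateLimit-Remaining", "1")) > 0:
        return 0.0
    reset = int(headers["X-RateLimit-Reset"])  # Unix epoch seconds
    now = time.time() if now is None else now
    return max(0.0, reset - now)

# With 487 calls remaining, no pause is needed:
print(backoff_seconds({"X-RateLimit-Limit": "500",
                       "X-RateLimit-Remaining": "487",
                       "X-RateLimit-Reset": "1706832000"}))  # 0.0
```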

Data Ingestion

The Data Engine accepts structured, semi-structured, and unstructured data from over 120 sources. Ingestion can be configured as one-time imports, scheduled batch jobs, or continuous streaming feeds. The system automatically detects schemas, infers data types, and handles encoding conversion for files originating from different systems.

Supported Data Sources

Databases

  • PostgreSQL (v10+)
  • MySQL / MariaDB (v5.7+)
  • MongoDB (v4.0+)
  • Microsoft SQL Server (2016+)
  • Oracle Database (12c+)
  • Amazon Redshift

Cloud Data Warehouses

  • Google BigQuery
  • Snowflake
  • Amazon S3 (CSV, JSON, Parquet)
  • Google Cloud Storage
  • Azure Blob Storage
  • Databricks Delta Lake

File Formats

  • CSV / TSV
  • JSON / NDJSON
  • Apache Parquet
  • Apache Avro
  • Excel (.xlsx, .xls)
  • XML

Applications & APIs

  • Salesforce
  • HubSpot
  • Stripe
  • Shopify
  • Google Analytics 4
  • Custom REST/GraphQL endpoints

Ingestion via API

Push data directly to VisionAI using our ingestion endpoint. The API accepts JSON payloads up to 10 MB per request. For larger volumes, use our bulk upload endpoint, which supports multipart file transfers up to 50 GB. Both endpoints return a job ID for tracking ingestion progress.

POST /v2/datasets/{dataset_id}/records

{
  "records": [
    {
      "timestamp": "2026-01-15T14:30:00Z",
      "metric": "page_views",
      "value": 14523,
      "segment": "organic_search"
    }
  ],
  "options": {
    "dedup_key": "timestamp",
    "on_conflict": "update"
  }
}
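Because each request is capped at 10 MB, larger record sets must be split client-side. A sketch of batching records into request bodies of the shape above; the 500-record batch size is an arbitrary illustration, not a documented limit:

```python
import json

def build_payloads(records, dedup_key="timestamp", on_conflict="update",
                   batch_size=500):
    """Yield JSON request bodies in the format shown above, one per batch.
    batch_size=500 is illustrative — size batches to stay under 10 MB."""
    for i in range(0, len(records), batch_size):
        yield json.dumps({
            "records": records[i:i + batch_size],
            "options": {"dedup_key": dedup_key, "on_conflict": on_conflict},
        })

records = [{"timestamp": f"2026-01-15T14:{m:02d}:00Z",
            "metric": "page_views", "value": 100 + m} for m in range(3)]
bodies = list(build_payloads(records, batch_size=2))
print(len(bodies))  # 2 — a batch of two records, then a batch of one
```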

Analysis & Models

Once data is ingested, the ML Engine processes it through several configurable stages. Automatic data profiling creates statistical summaries of each column, flagging missing values, outliers, and distribution characteristics. Feature engineering pipelines transform raw fields into model-ready features using encoding, normalization, binning, and interaction generation techniques.

AutoML Pipeline

The AutoML system evaluates multiple algorithm families (gradient boosting, random forests, linear models, neural networks) against your data and optimization target. It runs parallel training jobs with different hyperparameter configurations, using cross-validation to select the best performer. The entire process typically completes within 15 to 45 minutes depending on dataset size. Results include accuracy metrics, feature importance rankings, and a comparison table of all tested models.

Supported Model Types

Classification (binary & multi-class)
Regression (linear & non-linear)
Time series forecasting
Clustering & segmentation
Anomaly detection
Natural language processing
Recommendation engines
Custom model deployment

Model Deployment & Monitoring

Deployed models run in isolated containers with auto-scaling based on prediction request volume. The monitoring dashboard tracks prediction latency, throughput, data drift, and accuracy degradation over time. When the system detects significant drift between training data and incoming production data, it triggers an automatic retraining workflow using the latest records. You can configure approval gates that require manual sign-off before a retrained model replaces the current production version.
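VisionAI's internal drift metric is not documented on this page. As an illustration of the idea, one widely used score is the Population Stability Index (PSI), which compares the binned distribution of training data against incoming production data:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample (`expected`,
    e.g. training data) and a new sample (`actual`, e.g. production data).
    Values near 0 mean the distributions match; a common rule of thumb
    treats > 0.2 as significant drift. Illustrative stand-in only — not
    VisionAI's documented drift metric."""
    lo, hi = min(expected), max(expected)
    span = (hi - lo) or 1.0

    def binned_fractions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(max(int((x - lo) / span * bins), 0), bins - 1)
            counts[idx] += 1
        # Laplace-style smoothing so empty bins never produce log(0):
        return [(c + 0.5) / (len(sample) + 0.5 * bins) for c in counts]

    return sum((a - e) * math.log(a / e)
               for e, a in zip(binned_fractions(expected),
                               binned_fractions(actual)))

reference = [float(i) for i in range(100)]
print(round(psi(reference, reference), 6))                # 0.0 — no drift
print(psi(reference, [x + 50 for x in reference]) > 0.2)  # True — shifted data
```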

Automation

The Automation Engine lets you create workflows that execute actions based on data conditions, schedule triggers, or model predictions. Workflows are built using a visual drag-and-drop editor that chains together input sources, condition nodes, action steps, and output destinations. Every workflow execution is logged with full audit trails, including input data, decision paths taken, and results produced.

Trigger Types

Schedule-Based

Run workflows on cron schedules: hourly, daily, weekly, or custom intervals. Supports timezone configuration and skip-on-holiday rules for business calendar alignment.

Threshold Alerts

Define numeric or categorical thresholds on any monitored metric. When values cross the boundary, the workflow fires within 30 seconds. Supports compound conditions with AND/OR logic.

Webhook / Event-Driven

Receive incoming webhooks from external systems or internal pipeline events. Parse JSON payloads and route data through conditional branches based on event type and content.

ML Prediction-Based

Trigger actions when a deployed model produces predictions meeting specific criteria (e.g., churn probability exceeds 0.8, anomaly score above threshold). Enables proactive response workflows.
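The compound AND/OR conditions mentioned under Threshold Alerts can be pictured as a small recursive evaluator. The nested-dict format here is invented for illustration; the visual workflow editor stores conditions in its own schema:

```python
def evaluate(condition, metrics):
    """Recursively evaluate a threshold condition tree against current
    metric values. Leaf ops: gt / lt / eq; branch ops: and / or."""
    op = condition["op"]
    if op in ("and", "or"):
        results = (evaluate(c, metrics) for c in condition["conditions"])
        return all(results) if op == "and" else any(results)
    value = metrics[condition["metric"]]
    return {"gt": value > condition["value"],
            "lt": value < condition["value"],
            "eq": value == condition["value"]}[op]

# Fire when the error rate is high AND throughput has dropped:
rule = {"op": "and", "conditions": [
    {"op": "gt", "metric": "error_rate", "value": 0.05},
    {"op": "lt", "metric": "throughput", "value": 100},
]}
print(evaluate(rule, {"error_rate": 0.08, "throughput": 80}))   # True
print(evaluate(rule, {"error_rate": 0.08, "throughput": 150}))  # False
```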

Available Actions

Workflows can execute over 40 built-in action types including sending email or Slack notifications, updating database records, calling external APIs, generating PDF reports, creating support tickets in Jira or Zendesk, pushing data to CRM systems, and triggering retraining jobs. Custom action scripts written in Python can be uploaded for specialized operations. Each action supports retry logic with configurable backoff intervals and failure notification routing.

Dashboards & Reports

The Presentation Layer provides a flexible dashboard builder with over 50 visualization widget types. Dashboards refresh in real-time or on configurable intervals, pulling live data from your connected sources and model outputs. Multiple dashboards can be created per workspace, each tailored to different audiences such as executive summaries, operational monitoring, or detailed analytical exploration.

Widget Types

Line & area charts
Bar & column charts
Pie & donut charts
Scatter & bubble plots
Heatmaps
Geographic maps
Data tables
Funnel charts
KPI scorecards

Scheduled Reports

Configure automated report delivery in PDF, CSV, or Excel format. Reports can be sent via email to specified recipients on daily, weekly, or monthly schedules. Each report snapshots the current dashboard state, applies any saved filters, and includes a timestamp for audit purposes. Report templates support custom branding with your company logo, color scheme, and footer text. Distribution lists can be managed per report, with individual recipients able to unsubscribe from non-essential deliveries.

SDK Libraries

Official client libraries simplify API interaction by handling authentication, request formatting, pagination, retry logic, and type-safe response parsing. All SDKs are open-source, published to their respective package registries, and maintained by our engineering team with regular releases aligned to API version updates.

Python SDK

v3.2.1
pip install visionai-sdk

from visionai import VisionClient

client = VisionClient(api_key="YOUR_API_KEY")
dataset = client.datasets.get("ds_revenue_2026")
predictions = client.models.predict(
    model_id="mdl_churn_v4",
    data=dataset.sample(1000)
)
print(predictions.summary())

Node.js SDK

v2.8.0
npm install @visionai/sdk

const { VisionClient } = require('@visionai/sdk');

const client = new VisionClient({ apiKey: 'YOUR_API_KEY' });
const results = await client.datasets.query('ds_revenue_2026', {
  filters: { segment: 'enterprise' },
  limit: 500
});
const insights = await client.analyze(results);

Java SDK

v1.5.3
// Maven dependency
<dependency>
  <groupId>com.visionai</groupId>
  <artifactId>visionai-sdk</artifactId>
  <version>1.5.3</version>
</dependency>

VisionClient client = VisionClient.builder()
    .apiKey("YOUR_API_KEY")
    .build();
Dataset dataset = client.datasets().get("ds_revenue_2026");

Webhooks & Events

VisionAI emits events at key points in the data pipeline and model lifecycle. You can subscribe to these events by registering webhook endpoints in your workspace settings. Each event delivery includes a cryptographic signature (HMAC-SHA256) that your server should verify to confirm the payload originated from VisionAI.

Available Event Types

Event                      Description                                         Category
dataset.created            New dataset registered in workspace                 Data
dataset.sync.completed     Scheduled data sync finished successfully           Data
dataset.sync.failed        Data sync encountered an error                      Error
model.training.completed   Model training finished, results available          ML
model.drift.detected       Significant data drift flagged on production model  Alert
automation.executed        Workflow completed execution                        Automation
alert.threshold.breached   Monitored metric exceeded configured boundary       Alert

Webhook Payload Example

{
  "event": "model.training.completed",
  "timestamp": "2026-01-15T09:23:41Z",
  "workspace_id": "ws_abc123",
  "data": {
    "model_id": "mdl_churn_v4",
    "accuracy": 0.923,
    "f1_score": 0.891,
    "training_duration_seconds": 847,
    "dataset_rows": 245000,
    "status": "ready_for_deployment"
  },
  "signature": "sha256=a1b2c3d4e5f6..."
}
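Verification on the receiving side is a constant-time HMAC comparison. A sketch, assuming the signature is computed over the raw request body with your webhook secret; confirm the exact signed bytes, and whether the signature arrives as a body field or an HTTP header, in your workspace settings:

```python
import hashlib
import hmac

def verify_signature(secret: bytes, raw_body: bytes, signature: str) -> bool:
    """Return True if `signature` ('sha256=<hex>') matches the HMAC-SHA256
    of the raw payload. compare_digest avoids timing side channels."""
    expected = "sha256=" + hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

secret = b"whsec_example_only"  # hypothetical webhook secret
body = b'{"event": "model.training.completed"}'
sig = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
print(verify_signature(secret, body, sig))                      # True
print(verify_signature(secret, b'{"event": "tampered"}', sig))  # False
```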

Troubleshooting

Common issues and their resolutions. If you encounter a problem not covered here, contact our support team through the contact page or use the in-app chat available in your workspace dashboard.

API Status


Current API version: v2.14. Check the status page for real-time uptime monitoring and incident history across all service endpoints.

Changelog

Stay current with platform updates, new features, bug fixes, and deprecation notices. We publish release notes with every deployment.

Latest: v2.14 (January 10, 2026)

Support

Technical support is available through in-app chat, email, and scheduled calls. Enterprise plans include a dedicated account engineer.


Ready to Build Something Powerful?

Start your free 14-day trial and explore the full API with sandbox data. Our onboarding team is available to guide you through your first integration.