Frequently Asked Questions

Answers to the most common questions about OpenRails, organized by topic. Use these to prepare for prospect conversations.

General

What is OpenRails?

OpenRails is an AI-powered enterprise automation platform built by Clarity Ventures. It combines multi-LLM intelligence, advanced RAG (retrieval-augmented generation), and autonomous agent orchestration to help organizations automate complex business processes, manage knowledge, and integrate AI into existing workflows.

What problems does OpenRails solve?

OpenRails addresses several enterprise challenges: knowledge scattered across disconnected systems, repetitive manual processes that consume team capacity, difficulty leveraging AI without vendor lock-in, compliance concerns around AI and sensitive data, and the complexity of integrating AI with existing enterprise tools and workflows.

How is OpenRails different from other AI platforms?

Several factors differentiate OpenRails: multi-LLM support (self-hosted models, OpenAI, Anthropic, Google GenAI) eliminates vendor lock-in; dual RAG architecture (vector + graph) delivers superior retrieval accuracy; on-premise deployment with local AI models enables full data sovereignty; the platform's extensive pipeline library provides deep extensibility; and multiple security tiers with multi-layer encryption meet the needs of regulated industries.

Who uses OpenRails?

OpenRails serves enterprises across industries that need intelligent automation: technology companies, financial services, healthcare organizations, legal firms, government agencies, and manufacturing companies. Within these organizations, it serves CTOs and technical leadership, operations teams, business users, and compliance officers.

What deployment options are available?

OpenRails can be deployed on-premise (Docker/Kubernetes in your data center), in a private cloud, or in a hybrid configuration. On-premise deployments can run entirely air-gapped with local AI models. Hybrid deployments route sensitive data through local models while using cloud LLMs for non-sensitive workloads.
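
The hybrid routing idea can be sketched in a few lines. This is an illustrative sketch, not OpenRails code: the model identifiers and sensitivity labels are assumptions.

```python
# Hypothetical hybrid routing: sensitive workloads stay on the local
# model, everything else may use a cloud LLM. All names are illustrative.

LOCAL_MODEL = "llama-3-70b-local"  # assumed self-hosted model id
CLOUD_MODEL = "gpt-4o"             # assumed cloud model id

def route_request(doc_sensitivity: str) -> str:
    """Pick a model based on a data-sensitivity label."""
    sensitive_tiers = {"confidential", "restricted", "pii"}
    if doc_sensitivity.lower() in sensitive_tiers:
        return LOCAL_MODEL  # never leaves the network
    return CLOUD_MODEL      # non-sensitive workloads

print(route_request("confidential"))  # -> llama-3-70b-local
print(route_request("public"))        # -> gpt-4o
```

The key design point is that routing is driven by a classification label attached to the data, so governance policy, not application code, decides where a request runs.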

Who is Clarity Ventures?

Clarity Ventures is the company behind OpenRails. It specializes in enterprise software solutions and built OpenRails to address the growing need for secure, flexible, and extensible AI automation in enterprise environments.

Technical

Which LLM providers does OpenRails support?

OpenRails supports four LLM provider families: self-hosted models (local or on-premise models such as Llama, Mistral, and Phi), OpenAI (GPT-4o, GPT-4, GPT-3.5 Turbo), Anthropic (Claude models), and Google GenAI (Gemini). You can configure different models for different tasks and switch providers through configuration without code changes.
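
Configuration-driven provider selection can be sketched as below. This is a minimal illustration of the pattern, not the platform's actual configuration schema; the endpoint URLs are the providers' public base URLs and the internal one is a placeholder.

```python
# Sketch of config-driven provider switching. Provider names mirror the
# four families above; the mapping itself is an assumption.
from dataclasses import dataclass

@dataclass
class LLMConfig:
    provider: str  # "self_hosted" | "openai" | "anthropic" | "google"
    model: str

def make_endpoint(cfg: LLMConfig) -> str:
    """Resolve a config entry to an (illustrative) inference endpoint."""
    endpoints = {
        "self_hosted": "http://inference.internal/v1",  # placeholder host
        "openai": "https://api.openai.com/v1",
        "anthropic": "https://api.anthropic.com/v1",
        "google": "https://generativelanguage.googleapis.com/v1",
    }
    if cfg.provider not in endpoints:
        raise ValueError(f"unknown provider: {cfg.provider}")
    return f"{endpoints[cfg.provider]}#{cfg.model}"

# Switching providers is a config change, not a code change:
print(make_endpoint(LLMConfig("anthropic", "claude-sonnet")))
```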

What infrastructure does OpenRails require?

OpenRails requires Docker or Kubernetes for container orchestration. The stack includes an AI engine backend, OpenRails application platform, modern web interface, vector database, and relational database. For local LLM inference, a GPU-equipped server is recommended. Specific sizing depends on document volume, concurrent users, and chosen LLM models.

How does OpenRails scale?

OpenRails is built on containerized microservices that scale horizontally. The AI engine backend, vector database, and agent execution workers can each be scaled independently based on load. Kubernetes deployments support auto-scaling based on CPU, memory, or queue depth metrics.
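
For the CPU case, Kubernetes auto-scaling is typically expressed as a HorizontalPodAutoscaler. The manifest below is an illustrative sketch; the deployment name `ai-engine` and the replica/utilization numbers are assumptions, not shipped manifests.

```yaml
# Illustrative HPA for the AI engine backend (names are placeholders).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ai-engine
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ai-engine
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Scaling on queue depth rather than CPU requires exposing the queue metric to Kubernetes through an external/custom metrics adapter.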

How does RAG improve answer accuracy?

RAG significantly improves accuracy for enterprise-specific questions by grounding LLM responses in actual organizational data. The dual approach (vector + graph) further improves performance: vector search handles semantic similarity while graph traversal excels at multi-hop reasoning and relationship queries. Retrieval tuning parameters (top-K, similarity threshold, graph depth) let you optimize the accuracy/latency tradeoff.
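
The two retrieval knobs mentioned above, top-K and similarity threshold, compose as a rank-then-filter step. A minimal sketch, using precomputed scores as stand-ins for real vector search:

```python
# Toy retrieval step: keep at most top_k chunks whose similarity score
# clears the threshold. Scores here are hard-coded stand-ins.

def retrieve(scored_chunks, top_k=5, threshold=0.75):
    """Rank by similarity, truncate to top_k, drop low-confidence hits."""
    ranked = sorted(scored_chunks, key=lambda c: c[1], reverse=True)
    return [(text, score) for text, score in ranked[:top_k] if score >= threshold]

chunks = [("refund policy", 0.91), ("holiday hours", 0.62),
          ("warranty terms", 0.80), ("office map", 0.40)]
print(retrieve(chunks, top_k=2, threshold=0.75))
# -> [('refund policy', 0.91), ('warranty terms', 0.8)]
```

Raising `top_k` or lowering `threshold` trades latency and noise for recall, which is the accuracy/latency tradeoff the paragraph describes.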

Where is my data stored?

All data is stored within your deployment: documents and metadata in a relational database, vector embeddings in the vector database, and knowledge graph data in the graph store. For on-premise deployments, everything resides on your infrastructure. There is no external data storage or telemetry. All data at rest is protected using multi-layer encryption.

What is the technology stack?

The backend combines an AI engine (AI/ML processing, RAG pipelines) with the OpenRails platform (enterprise integration, pipelines). The frontend is a modern web interface. Vector storage uses a semantic search engine, graph knowledge uses a knowledge graph engine, and relational data uses a relational database. Live notifications push real-time updates to the web interface.

Security

How is sensitive data protected?

Sensitive data is protected at multiple layers: security tiers classify content by sensitivity; role-based access control governs who can access each resource; PII de-identification pipelines detect and redact personal information; encryption secures data at rest; and TLS 1.2+ protects data in transit. On-premise deployment ensures data never leaves your network.

How is data encrypted?

OpenRails encrypts all data at rest with enterprise-grade encryption. Key rotation is supported without re-encrypting existing data. TLS 1.2+ secures all network communication.

How does PII de-identification work?

A dedicated PII de-identification pipeline scans ingested content for personally identifiable information (names, emails, phone numbers, SSNs, etc.). Detected PII is classified by type and confidence, then redacted, masked, or tokenized based on your governance policy. All detections and actions are audit-logged.
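
The detect, classify, redact, audit-log flow can be sketched with a toy regex pass. Real pipelines combine NER models with policy and confidence scoring; the patterns and labels below are illustrative only.

```python
import re

# Toy PII redaction pass illustrating detect -> classify -> redact.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str):
    findings = []
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            findings.append((label, match))          # audit-log entry
            text = text.replace(match, f"[{label}]")  # redaction step
    return text, findings

clean, log = redact("Reach Ana at ana@example.com or 555-867-5309.")
print(clean)  # -> Reach Ana at [EMAIL] or [PHONE].
```

The `findings` list plays the role of the audit trail: every detection records its type and the matched value before redaction.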

Does OpenRails support regulatory compliance?

OpenRails provides the building blocks for compliance: data classification, PII handling, encryption, audit trails, access controls, and on-premise deployment. These capabilities support compliance with frameworks like HIPAA, SOX, and GDPR. The specific compliance posture depends on how the platform is configured and deployed within your organization's broader compliance program.

Can OpenRails run fully air-gapped?

Yes. When deployed on-premise with self-hosted models for local LLM inference, OpenRails can operate with zero external network access. All components—backend, frontend, vector DB, graph store, and LLM models—run locally. Container images can be transferred via offline media for initial deployment.
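
For the offline image transfer step, a typical Docker workflow looks like the following. The image names are placeholders, not actual OpenRails image names.

```shell
# On a connected machine: export the images to a single archive.
docker save -o openrails-images.tar openrails/backend:1.4 openrails/frontend:1.4

# Move openrails-images.tar via approved offline media, then on the
# air-gapped host, import the archive into the local Docker daemon:
docker load -i openrails-images.tar
```

Kubernetes deployments follow the same idea, with the archive loaded into the cluster's private registry.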

Integration

Which systems does OpenRails integrate with?

Pre-built connectors include Basecamp, Azure DevOps, OneDrive/SharePoint, databases (SQL Server, MySQL, MongoDB, and others), and email (SMTP/IMAP). The MCP protocol and a custom REST API connector allow integration with virtually any system that exposes an API.

What is MCP and why does it matter?

MCP (Model Context Protocol) is the standardized interface OpenRails uses for tool communication. Built on an open standard protocol, it allows agents to dynamically discover available tools, understand their parameters via JSON Schema, invoke them with structured requests, and handle responses consistently. This enables a plug-and-play tool ecosystem.
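
An MCP tool advertises its name, a description, and a JSON Schema for its inputs, which is what makes discovery and structured invocation possible. The descriptor below follows that shape; the `lookup_invoice` tool itself is a hypothetical example, and the validation helper is a deliberately minimal stand-in for full JSON Schema validation.

```python
# Hypothetical MCP-style tool descriptor: the agent discovers the tool's
# name and a JSON Schema describing its parameters.
TOOL_DESCRIPTOR = {
    "name": "lookup_invoice",  # illustrative tool, not a built-in
    "description": "Fetch an invoice by its ID.",
    "inputSchema": {
        "type": "object",
        "properties": {"invoice_id": {"type": "string"}},
        "required": ["invoice_id"],
    },
}

def validate_call(descriptor: dict, args: dict) -> bool:
    """Minimal check that all required schema fields are present."""
    schema = descriptor["inputSchema"]
    return all(field in args for field in schema.get("required", []))

print(validate_call(TOOL_DESCRIPTOR, {"invoice_id": "INV-42"}))  # -> True
```

Because the schema travels with the tool, an agent can construct a valid call to a tool it has never seen before, which is the plug-and-play property described above.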

Can we build custom tools and extensions?

Yes. Custom tools can be created by implementing the MCP interface (a standard protocol endpoint with a JSON Schema descriptor). The platform's extensive pipeline library also provides extension points for custom processing logic, authentication adapters, notification channels, and more—all without modifying the core platform.

Does OpenRails have a full API?

Yes. Every capability in OpenRails is available through its REST API, exposed across a comprehensive set of API domains. The API covers agents, chat, documents, collections, connectors, evaluation, and administration. Authentication uses JWT tokens with role-based authorization.
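
Calling a JWT-protected endpoint reduces to attaching a bearer token to each request. A sketch using only the Python standard library; the host, path, and token below are placeholders, not documented OpenRails routes.

```python
import urllib.request

def build_request(base_url: str, path: str, token: str) -> urllib.request.Request:
    """Build a GET request carrying JWT bearer auth (paths are assumed)."""
    return urllib.request.Request(
        url=f"{base_url}{path}",
        headers={
            "Authorization": f"Bearer {token}",  # JWT bearer token
            "Accept": "application/json",
        },
    )

req = build_request("https://openrails.internal", "/api/v1/agents", "demo-token")
print(req.get_header("Authorization"))  # -> Bearer demo-token
```

The same header pattern applies across all API domains; only the path and HTTP verb change per operation.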

Pricing

How is OpenRails priced?

OpenRails uses an enterprise licensing model based on deployment scope, user count, and support tier. Pricing is customized to each organization's specific requirements. Contact the Clarity Ventures sales team for a detailed quote.

What's the difference between the standard and enterprise editions?

The enterprise edition includes advanced features such as advanced security tiers, multi-layer encryption, advanced PII pipelines, multi-node clustering, priority support, and custom pipeline development. The standard edition covers core RAG, chat, agent orchestration, and standard connectors. Your Clarity Ventures account team can detail the specific feature differences.

What support options are available?

Three support tiers are available: Standard (business-hours support, email/ticket), Premium (extended hours, priority queue, dedicated support engineer), and Enterprise (24/7 support, named account team, on-site assistance, custom SLAs). All tiers include access to documentation, partner portal resources, and platform updates.