AI-Native Automation Platform

Build AI Workflow Automations Without Writing Code

Visual canvas, multi-agent orchestration, RAG pipelines, and MCP support. The first truly AI-native workflow automation platform.

Heym is a source-available, self-hosted platform for building AI automations on a drag-and-drop canvas. Describe your workflow in natural language and the assistant generates it, or wire it manually with a wide range of built-in nodes, including Telegram, IMAP, and outbound WebSocket nodes. Deploy with Docker Compose or Kubernetes, keep full control of your data, and license commercial redistribution separately when you need it.

38 Node Types · Multi-Agent Orchestration · Built-in RAG Pipelines · Source Available

Heym is an AI-native low-code automation platform with a visual workflow editor. It supports LLM nodes for text generation and vision, Agent nodes with tool calling and Python tools, Qdrant RAG for semantic search, MCP client and server integration, human-in-the-loop approval checkpoints, content guardrails, parallel DAG execution, a skills system for portable agent capabilities, Playwright browser automation with auto-heal, and integrations with Telegram bots, Slack, IMAP inboxes, outbound WebSocket streams, email, Redis, RabbitMQ, Grist, and any HTTP API. Self-host with Docker Compose or Kubernetes on your own infrastructure.

Integrates with

OpenAI
Anthropic
Ollama
Cerebras
Google Gemini
OpenRouter
Slack
Gmail
Outlook
Telegram
Qdrant
Redis
PostgreSQL
RabbitMQ
WebSocket
Docker
MCP
Grist
Google Sheets
BigQuery
Playwright

Heym integrates with OpenAI, Anthropic, Ollama, Cerebras, Google Gemini, OpenRouter, Slack, Gmail, Outlook, Telegram, Qdrant, Redis, PostgreSQL, RabbitMQ, WebSocket, Docker, MCP, Grist, Google Sheets, BigQuery, Playwright.

Purpose-Built for AI Workflows

Not just automation tools with AI added on top. Heym is built from the ground up with AI as the execution model.

AI Assistant

Describe what you want in natural language or voice — the assistant generates nodes and edges and applies them to the canvas instantly. The AI streams its response and any valid workflow JSON is automatically parsed and applied, so you go from idea to working automation in seconds.

  • Natural language workflow generation from plain text or voice
  • Streams response and auto-applies generated nodes to the canvas
  • Works alongside manual editing for hybrid building
  • Filters attached Agent skills down to SKILL.md context when editing complex workflows
  • Supports any configured LLM credential and model

Visual Workflow Editor

Drag-and-drop canvas powered by Vue Flow with a broad library of built-in node types across 7 categories. Build complex workflows without writing code, pin node outputs for isolated testing, and track every change with built-in version history.

  • A broad node library spanning triggers, AI, logic, data, integrations, automation, and utilities
  • Expression DSL for dynamic data transformation between nodes
  • Data Pin lets you freeze a node output and test downstream logic without re-running
  • Extract to Sub-Workflow turns any selection into a reusable workflow
  • Full keyboard shortcuts: run, save, undo, copy, paste, and inline node search

Multi-Agent Orchestration

One orchestrator agent delegates tasks to named sub-agents and sub-workflows, all wired visually on the canvas. Each agent can use Python tools, connect to MCP servers, load skills, and call other workflows — up to 5 levels of nesting depth.

  • Named sub-agents with independent tool calling and reasoning
  • Sub-workflows let agents invoke entire workflows as tools
  • Skills provide portable instruction files and Python tools via drag-and-drop
  • MCP client connections give agents access to external tool servers
  • Configurable reasoning effort and automatic model fallback
  • Automatic context compression keeps long-running agents within model limits

Built-In RAG Pipeline

Upload PDFs, Markdown, CSV, or JSON to managed Qdrant vector stores directly from the dashboard. Wire a RAG node into any workflow to perform semantic search with metadata filters and optional Cohere reranking, then feed the results into LLM or Agent nodes.

  • Qdrant vector store management with one-click document upload
  • Semantic search returns text, relevance score, and metadata per result
  • Optional Cohere reranking for higher-precision retrieval
  • Multi-format document support: PDF, TXT, Markdown, CSV, and JSON
  • Share vector stores with team members for collaborative knowledge bases
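In production the Qdrant node performs this search server-side, but the shape of a result (text, relevance score, metadata) can be sketched with a toy in-memory retriever. The documents, vectors, and filter below are illustrative only:

```python
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def semantic_search(query_vec, docs, metadata_filter=None, top_k=3):
    """Rank stored chunks by cosine similarity, honoring a metadata filter,
    and return text, score, and metadata per result."""
    candidates = [
        d for d in docs
        if metadata_filter is None
        or all(d["metadata"].get(k) == v for k, v in metadata_filter.items())
    ]
    candidates.sort(key=lambda d: cosine(query_vec, d["vector"]), reverse=True)
    return [
        {"text": d["text"],
         "score": round(cosine(query_vec, d["vector"]), 3),
         "metadata": d["metadata"]}
        for d in candidates[:top_k]
    ]

# Toy 2-dimensional "embeddings"; real vector stores use hundreds of dimensions.
docs = [
    {"text": "Reset your password", "vector": [0.9, 0.1], "metadata": {"source": "faq.md"}},
    {"text": "Billing cycles",      "vector": [0.1, 0.9], "metadata": {"source": "billing.md"}},
    {"text": "Password policy",     "vector": [0.8, 0.3], "metadata": {"source": "faq.md"}},
]
hits = semantic_search([1.0, 0.0], docs, metadata_filter={"source": "faq.md"}, top_k=2)
```

Filtered results then flow into an LLM or Agent node as retrieved context.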

Persistent Agent Memory

Enable graph-based memory on any agent node to automatically extract and store entities and relationships from conversations. The knowledge graph persists across workflows and can be viewed, edited, and managed manually in the visual editor.

  • LLM-powered entity extraction with automatic relationship detection
  • Semi-aggressive deduplication merges similar entities automatically
  • Per-agent memory scope with isolated knowledge graphs
  • Visual graph editor for manual node and edge management
  • Async execution ensures workflows never slow down
  • View memory directly from canvas with a single click

LLM Batch API Mode

Switch the LLM node to OpenAI’s Batch API: one array of prompts per run, a main path for merged results, and a dedicated status branch for live progress while the batch job runs—built for high-volume, cost-efficient workflows.

  • Enable batch mode on supported models with your OpenAI API credential; the node panel indicates whether batch is available
  • One run can take an array expression such as $input.items.map("item.text") as the user message
  • Dedicated STATUS output emits pending, processing, completed, and failed style updates for side workflows
  • Final payload includes per-item results, counts, and batch job metadata from OpenAI
  • Suited to lower-cost bulk work, live progress notifications, and status logging branches

Automatic Context Compression

Agent nodes automatically compress conversation history near the model context limit, preserving the system prompt and key turns while summarizing the middle for long-running tasks.
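A minimal sketch of the idea (not Heym's actual algorithm): keep the system prompt and the most recent turns, and collapse the middle into one summary message. In practice `summarize` would be an LLM call:

```python
def compress_history(messages, keep_recent=4, summarize=None):
    """Keep the system prompt and the last keep_recent turns; replace the
    middle with a single summary message."""
    if summarize is None:
        # Placeholder summarizer; a real implementation calls an LLM here.
        summarize = lambda msgs: f"[summary of {len(msgs)} earlier messages]"
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    if len(rest) <= keep_recent:
        return messages  # nothing to compress yet
    middle, recent = rest[:-keep_recent], rest[-keep_recent:]
    return system + [{"role": "assistant", "content": summarize(middle)}] + recent

history = [{"role": "system", "content": "You are a support agent."}]
history += [{"role": "user", "content": f"turn {i}"} for i in range(10)]
compressed = compress_history(history)
```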

Human-in-the-Loop

Agent nodes can pause at approval checkpoints, generate a public review link, and wait for a reviewer to accept, edit, or refuse before resuming execution.

Guardrails

Block unsafe or unwanted content before it reaches an LLM. Choose categories like violence, hate speech, or harassment and set sensitivity to low, medium, or high per node.

MCP Support

Connect agents to any MCP server to gain external tools, or expose your Heym workflows as an MCP server for Claude Desktop, Cursor, and other clients.

Portal

Turn any workflow into a public chat UI at a custom slug URL. Supports streaming responses, file uploads, image output display, and optional per-user authentication.

Skills System

Portable capability bundles consisting of a SKILL.md instruction file and optional Python tools. Drag and drop a .zip or .md onto any Agent node, or use AI Build to draft and revise skills with live file previews.

Auto Heal

When a Playwright selector breaks at runtime, the AI step automatically finds an alternative selector and retries the action, keeping browser automations resilient to UI changes.

Visual Schedule View

See all active cron workflows on a day, week, or month calendar. Each block shows the workflow name and cron expression on hover — click to jump straight to the canvas.

Parallel Execution

The engine builds a directed acyclic graph and runs independent nodes concurrently with a thread pool. Streaming mode emits node events as each step completes, and webhook SSE consumers can subscribe via custom, configurable endpoints.
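The execution model can be illustrated with the Python standard library's `graphlib` and a thread pool. This is a sketch of the general technique, not Heym's engine, and the node functions are hypothetical; as a simplification, each ready batch is awaited together rather than scheduling nodes the instant their last dependency finishes:

```python
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

def run_dag(nodes, edges, max_workers=4):
    """Run each node once all of its upstream dependencies have finished.
    nodes: {name: callable(inputs) -> output}; edges: {name: set of upstream names}."""
    ts = TopologicalSorter(edges)
    ts.prepare()
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        while ts.is_active():
            ready = list(ts.get_ready())  # all nodes whose dependencies are done
            futures = {
                name: pool.submit(nodes[name], {d: results[d] for d in edges.get(name, ())})
                for name in ready
            }
            for name, fut in futures.items():
                results[name] = fut.result()
                ts.done(name)
    return results

# Diamond-shaped workflow: "double" and "square" run concurrently.
nodes = {
    "fetch":  lambda inp: [1, 2, 3],
    "double": lambda inp: [x * 2 for x in inp["fetch"]],
    "square": lambda inp: [x * x for x in inp["fetch"]],
    "merge":  lambda inp: inp["double"] + inp["square"],
}
edges = {"double": {"fetch"}, "square": {"fetch"}, "merge": {"double", "square"}}
out = run_dag(nodes, edges)
```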

Enterprise Security

JWT authentication with HttpOnly cookies and refresh token rotation. Credentials encrypted at rest with AES-256 Fernet. Rate limiting on login, register, and portal endpoints.
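Heym's auth code is not shown here, but the HS256 scheme that JWTs use is standard and can be illustrated with the Python standard library alone:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(payload: dict, secret: bytes) -> str:
    """Produce a compact HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}, separators=(",", ":")).encode())
    body = b64url(json.dumps(payload, separators=(",", ":")).encode())
    sig = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_token(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```

Storing such a token in an HttpOnly cookie keeps it out of reach of page JavaScript, which is the point of the design described above.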

Enterprise Integrations

Connect workflows to Telegram, Slack, IMAP, outbound WebSocket, Google Sheets, BigQuery, and more with first-party nodes for data, messaging, and realtime sync.

Error Handling & Retry

Define per-node error handlers and retry policies to recover from transient failures automatically. Workflow-level error handlers catch unhandled exceptions and route them to a dedicated recovery flow.
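A per-node retry policy with exponential backoff and a fallback error handler can be sketched like this; it is illustrative, not Heym's API:

```python
import time

def run_with_retry(fn, retries=3, base_delay=0.01, on_error=None):
    """Call fn, retrying with exponential backoff on failure. After the last
    attempt, hand the exception to on_error (the recovery flow) or re-raise."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except Exception as exc:
            if attempt == retries:
                if on_error is not None:
                    return on_error(exc)
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1x, 2x, 4x, ...

# A node that fails twice with a transient error before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = run_with_retry(flaky)
```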

See Heym in Action

A powerful, intuitive interface designed for building complex AI workflows.

From the dashboard you manage workflows, credentials, vector stores, teams, and analytics. The visual canvas editor lets you connect many built-in node types with a drag-and-drop interface, pin node outputs for debugging, and use the AI Assistant to generate workflows from natural language.

The interface supports both light and dark themes, responsive layouts for desktop and tablet, and keyboard shortcuts for power users. Each node displays its execution status, output preview, and error state directly on the canvas. The properties panel provides inline expression editing with autocomplete for the DSL syntax, model selection dropdowns, and file upload areas for skills and RAG documents.

Heym login page with animated workflow canvas background showing connected AI nodes

Authentication

Log into Heym

The login screen features an animated workflow canvas in the background, previewing the drag-and-drop editor before you even sign in. JWT authentication with HttpOnly cookies ensures secure sessions. Create your account with email and password to access the full dashboard with all 15 management tabs.

What's in the box

Everything an AI workflow needs. Nothing it doesn't.

AI ASSISTANT

Describe it. Heym builds it.

Type or speak your automation in plain English — the Assistant generates the entire canvas: nodes, edges, credentials, expressions. Modify existing flows the same way: "add a HITL checkpoint before Output."

  • Works with any LLM credential you configure
  • Streams responses · auto-applies valid JSON
  • Voice input, transcribed in-browser
Heym AI assistant generating a workflow canvas from natural language description
MULTI-AGENT

Agents that delegate to agents.

One orchestrator, five levels of nested sub-agents and sub-workflows. Each agent has its own model, tools, and Python execution context. Handoffs are visual — you can see the control flow, not infer it from logs.

  • Python tool calling per agent
  • MCP tool servers as drop-in tools
  • Auto context compression at 80% window
Heym multi-agent orchestration with nested sub-agents and visual control flow
RAG · NATIVE

Vector stores, not another tool tab.

Upload PDFs, Markdown, CSVs directly. Managed Qdrant vector stores live alongside your workflows. The Qdrant RAG node wires retrieval into any flow with metadata filters and optional Cohere reranking.

  • One-click upload · chunking & embeddings handled
  • Hybrid search (dense + BM25 + rerank)
  • Team-shared knowledge bases
Heym built-in RAG pipeline with Qdrant vector store and document upload
MCP · FIRST-CLASS

Your workflows become AI tools.

Every workflow is exposable over MCP. Paste the SSE URL into Claude Desktop, Cursor, or any MCP client. Your AI can now run your automations from its own interface. Agents can also consume any external MCP server as tools.

  • Bidirectional: MCP server + client
  • Per-workflow API keys
  • No SDK, no adapter code
Heym MCP server exposing workflows as tools for Claude Desktop and Cursor
HUMAN-IN-THE-LOOP

Approval checkpoints as a primitive.

Drop a HITL node and execution pauses, generates a public review link, sends it via your configured channel. A reviewer can accept, edit, or refuse. AI output never ships without someone accountable signing off.

  • Public shareable review link
  • Accept · edit · refuse · timeout
  • Reviewer history in the trace
Heym human-in-the-loop node with public review link and approval workflow
OBSERVABILITY

851 traces on average per real alpha user.

Every LLM call, every agent step, every token and millisecond — captured automatically. Filter by source, credential, model. The trace viewer shows the full prompt, response, tool calls, and timing waterfall. Replay any run against a different model.

  • Span timeline per node execution
  • Data Pin · freeze outputs for testing
  • Built-in evals with LLM-as-Judge
Heym observability dashboard showing LLM trace timeline with token usage and timing

Everything in One Dashboard

Heym's dashboard gives you a unified control center for workflows, credentials, vector stores, teams, analytics, evaluations, and more — all accessible from a single navigation bar with 15 dedicated tabs.

Each tab is purpose-built for a specific aspect of workflow management. From organizing automations in folders to inspecting LLM traces, from managing team permissions to running evaluation suites against multiple models — the dashboard eliminates the need for external tools and keeps your entire AI automation stack in one place.

The Workflows tab supports card and list views with nested folder organization, drag-and-drop JSON import, and one-click export for backup and sharing. Credentials are encrypted at rest with AES-256 Fernet and can be shared with specific users or teams. The Analytics tab tracks execution counts, success rates, and average duration with trend analysis across 7, 30, and 90 day windows. Traces provide full request and response payloads with timing breakdowns for every LLM and tool call in your workflows.

Workflows

Create, organize, and manage your automation workflows in a card grid or list view with folder and sub-folder support. Import workflows by dragging JSON files, or export them for backup and sharing between instances.

Templates

Save and reuse entire workflows or individual node configurations as templates. Share templates with everyone or specific users and teams for consistent automation patterns across your organization.

Variables

Manage persistent global variables that survive across workflow executions. Store counters, configuration values, accumulated lists, and shared state accessible from any workflow via $global.variableName expressions.

Chat

A direct LLM chat interface for testing models, prototyping prompts, and asking questions without building a full workflow. Supports streaming, markdown rendering, inline images, and voice input.

Credentials

Securely manage API keys and secrets used by workflow nodes. All values are encrypted at rest with AES-256 Fernet and masked in the UI. Share credentials with individual users or entire teams.

Vector Stores

Create and manage Qdrant vector stores for RAG pipelines. Upload documents in multiple formats, manage content and sources, and share stores with team members for collaborative knowledge bases.

MCP

Configure Model Context Protocol integration to connect agents to external tool servers. Each workflow with an Agent node can expose its tools via MCP for Claude Desktop, Cursor, and other compatible clients.

Traces

Inspect LLM execution traces with full request and response payloads, timing breakdowns for LLM and tool calls, and skills included in each invocation. Essential for debugging agent behavior and optimizing prompts.

Analytics

View execution metrics including total runs, success rates, and average duration with period-over-period trend analysis. Filter by time range or specific workflow and track performance over 7, 30, or 90 day windows.

Evals

Create evaluation suites with test cases to systematically test AI workflows. Run evaluations against multiple models simultaneously, compare pass/fail rates, and use LLM-as-Judge scoring for automated quality assessment.

Teams

Create teams and add members by email to share workflows, credentials, variables, templates, and vector stores with a group at once. All team members automatically gain access to shared resources.

DataTables

Create structured data tables with typed columns including string, number, boolean, date, and JSON. Manage rows with inline editing, import and export CSV files, and share tables with read or write permissions.

Drive

Browse all files generated by skills across your workflows. Search, download, share, or delete files with full metadata including source node, creation date, file type, and size.

Scheduled

See all active cron workflows on a day, week, or month calendar. Each block shows the workflow name and cron expression — hover to preview the schedule, click to jump straight to the canvas.

Logs

View Docker container logs for the entire Heym stack including backend, frontend, and PostgreSQL. Filter by container and log level for efficient debugging and infrastructure troubleshooting.

Built for Every Team

From customer support to DevOps, Heym adapts to your workflow needs with AI-powered automation.

Each use case below shows a real workflow pattern you can build on the Heym canvas. Combine trigger nodes for scheduling or webhooks, AI nodes for language understanding and generation, logic nodes for branching and looping, and integration nodes for connecting to external services. Every workflow runs as a parallel DAG with automatic dependency resolution, so steps that can execute concurrently do so without extra configuration.

Automate Customer Support with AI Agents

Build intelligent support workflows that understand context, search knowledge bases, and resolve issues autonomously. An Agent node retrieves answers from your RAG-powered documentation, applies guardrails for safe responses, and escalates to a human reviewer through the built-in approval checkpoint when confidence is low.

  • RAG-powered knowledge base with Qdrant vector stores for instant semantic answers
  • Human-in-the-loop escalation generates a public review link for complex cases
  • Multi-language support with configurable guardrails to block unsafe content
  • IMAP inbox polling plus Slack and email delivery for omnichannel support triage
  • Portal mode turns the workflow into a public chat UI for end-user self-service
Try This Workflow

Customer Support Workflow

7 nodes · Powered by Heym

IMAP Trigger
Qdrant RAG
AI Agent
Guardrails
HITL Review
Slack Notify
Send Email

Many Powerful Node Types

Connect anything to anything. Build complex workflows by dragging and dropping nodes onto the canvas.

Every node type in the editor is listed below — from triggers (HTTP, Telegram, Slack, IMAP, outbound WebSocket, cron, RabbitMQ, errors) to integrations such as Qdrant, RabbitMQ, WebSocket send, Google Sheets, BigQuery, Grist, Redis, and SMTP. Wire them with expressions like $nodeLabel.field. Independent branches run in parallel automatically.
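Heym's expression DSL is richer than this (arithmetic, string helpers, array operations), but resolving a dotted reference like `$nodeLabel.field` against upstream outputs boils down to a path lookup. The resolver and node names below are hypothetical:

```python
def resolve(expression: str, context: dict):
    """Resolve a '$node.field.sub' reference against upstream node outputs.
    Anything not starting with '$' is treated as a literal value."""
    if not expression.startswith("$"):
        return expression
    head, *path = expression[1:].split(".")
    value = context[head]          # output of the named upstream node
    for key in path:
        value = value[key]         # walk the remaining field path
    return value

# Hypothetical upstream outputs keyed by node label.
context = {"fetchUser": {"body": {"email": "[email protected]"}}}
email = resolve("$fetchUser.body.email", context)
```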

Input

Zero-input entry for HTTP and webhook runs: define typed input fields, read body and headers, and return workflow output to callers.

Telegram Trigger

Start from Telegram bot webhook updates — read incoming text, callback queries, chat IDs, and sanitized headers for downstream expressions.

Slack Trigger

Start from Slack Events API requests — URL verification, signing-secret validation, and the full event payload for expressions.

IMAP Trigger

Poll a mailbox on a per-node interval, parse each new email, and expose subject, bodies, headers, addresses, and attachment metadata.

WebSocket Trigger

Maintain an outbound client connection to an external WebSocket server and start runs on message, connected, or closed events.

Cron

Run on a schedule with standard cron expressions — no separate job runner required.
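Standard cron expressions have five fields: minute, hour, day of month, month, and day of week. A minimal matcher for the common syntax (`*`, `*/n` steps, `a-b` ranges, comma lists) can be sketched as follows; it omits real cron's OR semantics when both day fields are restricted:

```python
from datetime import datetime

def field_matches(field: str, value: int) -> bool:
    """Match one cron field: '*', '*/n' steps, 'a-b' ranges, comma lists."""
    for part in field.split(","):
        if part == "*":
            return True
        if part.startswith("*/"):
            if value % int(part[2:]) == 0:
                return True
        elif "-" in part:
            lo, hi = map(int, part.split("-"))
            if lo <= value <= hi:
                return True
        elif int(part) == value:
            return True
    return False

def cron_matches(expr: str, dt: datetime) -> bool:
    """Check a five-field cron expression against a datetime."""
    minute, hour, day, month, weekday = expr.split()
    cron_dow = (dt.weekday() + 1) % 7  # cron convention: 0 = Sunday
    return (field_matches(minute, dt.minute)
            and field_matches(hour, dt.hour)
            and field_matches(day, dt.day)
            and field_matches(month, dt.month)
            and field_matches(weekday, cron_dow))
```

For example, `*/15 9-17 * * 1-5` fires every 15 minutes during business hours on weekdays.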

RabbitMQ

Publish to exchanges and queues or receive from a queue — one node with send/receive operations for event-driven flows.

Error Handler

Runs when another node fails (no incoming edge). Use error context for Slack alerts, logging, or recovery branches.

LLM

Process text with language models (OpenAI, Ollama, vLLM, Cohere, …): text and vision, optional JSON schema, and content guardrails.

AI Agent

LLM with Python tools, MCP clients, skills, sub-agents, human-in-the-loop review, and optional persistent memory.

Qdrant RAG

Insert documents into or query a Qdrant vector store for retrieval-augmented generation with filters and optional reranking.

Condition

Branch on a boolean expression — separate true and false outputs.

Why Heym?

See how Heym compares to other automation platforms. Built AI-first, not AI-added.

Feature · Heym · n8n · Zapier · Make
Built-in LLM NodeSend prompts to language models for text generation, vision, image creation, and structured JSON output.
LLM Batch API + Status BranchesSend an array of prompts through the OpenAI Batch API, with a dedicated status branch for live progress from pending through completed—alongside your main result path on the canvas.
Built-in Agent Node (Tool Calling)LLM-powered agents that can call Python tools, execute code, and interact with external services autonomously.
Multi-Agent OrchestrationOne orchestrator agent delegates tasks to named sub-agents and sub-workflows with up to 5 levels of nesting.
Built-in RAG / Vector StoreUpload documents to managed Qdrant vector stores and perform semantic search with metadata filters and reranking.
WebSocket Read / WriteNative inbound and outbound WebSocket workflow steps for listening to external streams and pushing realtime payloads.
Natural Language Workflow BuilderDescribe what you want in plain text or voice and the AI assistant generates the entire workflow on the canvas.
MCP (Model Context Protocol)Connect agents to MCP tool servers or expose your workflows as an MCP server for Claude Desktop and Cursor.
Skills System for AgentsPortable capability bundles with SKILL.md instructions and optional Python tools that extend agent behavior via drag-and-drop.
LLM Trace InspectionView full request and response payloads, timing breakdowns, tool calls, and skills included for every LLM invocation.
Built-in Evals for AI WorkflowsCreate evaluation suites with test cases, run them against multiple models, and compare pass/fail rates with LLM-as-Judge scoring.
Human-in-the-Loop (HITL)Agents pause at approval checkpoints, generate a public review link, and wait for a reviewer to accept, edit, or refuse.
LLM GuardrailsBlock unsafe content categories like violence, hate speech, or harassment with configurable sensitivity levels per node.
Automatic Context CompressionAgent conversations automatically compress when reaching context limits to prevent timeouts and enable long-running tasks.
Parallel DAG ExecutionThe engine builds a directed acyclic graph and runs independent nodes concurrently with a thread pool for maximum throughput.
Self-Hostable, Source AvailableDeploy on your own infrastructure with Docker Compose or Kubernetes. Your data never leaves your servers.
Expression DSL for Dynamic DataReference upstream node outputs with expressions like $nodeLabel.field, with support for arithmetic, string helpers, and array operations.
Native, first-party support
Partial or plan-dependent
No documented native support

Comparison reflects publicly documented, native product capabilities reviewed on April 21, 2026. Availability may vary by plan, deployment model, or third-party integrations. “Partial” indicates limited, plan-restricted, or indirect implementation — hover for details.

Heym was designed AI-first from day one — multi-agent orchestration, built-in RAG, a portable skills system, and parallel DAG execution are core primitives, not add-ons. Combined with full self-hosting, LLM trace inspection, and a natural-language workflow builder, Heym gives teams complete control over their AI automation stack without vendor lock-in.

Built with Modern Tech

Every component is chosen for performance, developer experience, and reliability.

Python
TypeScript
Vue.js
FastAPI

Frontend

The application frontend is built with Vue.js 3 and TypeScript for type safety, with Vue Flow powering the visual canvas editor. Vite and Bun provide fast builds, while Tailwind CSS and Shadcn Vue handle styling.

Vue.js 3
Framework
TypeScript
Language
Vite + Bun
Build
Vue Flow
Canvas
Pinia
State
Tailwind CSS
Styling
Shadcn Vue
UI

Backend

A Python backend powered by FastAPI delivers async performance for concurrent workflow executions. UV manages packages and Alembic handles database schema migrations.

Python 3.11+
Language
FastAPI
Framework
UV
Package Mgr
Alembic
Migrations

Database & Infra

PostgreSQL stores workflows and execution history, Redis handles caching and rate limiting, RabbitMQ provides message-driven triggers, and Qdrant powers vector search for RAG pipelines.

PostgreSQL 16
Database
SQLAlchemy 2.0
ORM
Redis
Cache
RabbitMQ
Queue
Docker
Container
Qdrant
Vector DB

Security & Auth

JWT tokens in HttpOnly cookies with refresh token rotation secure user sessions. Passwords use bcrypt hashing, credentials are encrypted at rest with AES-256 Fernet, and Pydantic v2 validates all inputs.

JWT
Auth
bcrypt
Passwords
Pydantic v2
Validation
Fernet
Encryption

AI & LLM

Connect to OpenAI, Ollama for local models, vLLM for high-throughput inference, or Cohere for embeddings and reranking. MCP support lets agents connect to external tool servers or expose workflows as tools.

OpenAI
LLM
Ollama
Local
vLLM
Inference
MCP
Protocol
Cohere
LLM

Developer Experience

First-party documentation lives inside Heym—structured guides for every node, tab, and reference topic. Developers and AI agents get a native chat-with-docs experience to explore and apply the docs without leaving the product. The codebase is kept healthy with ESLint and Ruff lint checks and pytest-backed unit tests.

In-app documentation
Native
Chat with docs
Built-in
Agent-friendly docs
Agents
Source Available

Join Our Growing Community

Your data, your infrastructure, your code. Self-host it anywhere, contribute to the project, and be part of something bigger.

We believe AI workflow automation should be transparent, auditable, and under your control. With full source access you can review the security model, extend the platform with custom nodes, and deploy in air-gapped environments where data sovereignty is non-negotiable.

Getting started locally takes three commands: clone the repository, copy the example environment file, and run the setup script. The script starts PostgreSQL, runs database migrations, and launches both the FastAPI backend and the Vue.js frontend. If you prefer a prebuilt install, you can run the published container image instead.

Free to Use

No licensing fees, no usage limits. Self-host Heym on your own infrastructure with Docker Compose, a prebuilt container image, or Kubernetes.

Source Available

Complete transparency over every line of code. Audit the security model, customize features, and contribute improvements back to the project.

Community Driven

Built together with contributors who share workflows, report bugs, suggest features, and shape the product roadmap through GitHub and Discord.

Commons Clause

Licensed under Commons Clause plus MIT — free to use, modify, and distribute, but not to resell as a paid service. Keeps Heym transparent and accessible.

$ git clone https://github.com/heymrun/heym.git
$ cd heym && cp .env.example .env
$ ./run.sh
✓ Ready at http://localhost:4017
or
$ docker run --env-file .env -p 4017:4017 ghcr.io/heymrun/heym:latest
Star on GitHub

Licensed under Commons Clause + MIT

Enterprise Solutions

Power Your Enterprise with Heym

Commercial use, enterprise licensing, and professional support for organizations of all sizes.

While Heym is source-available and free to self-host under Commons Clause + MIT, enterprise organizations often need dedicated support, commercial licensing, custom integrations, and guaranteed SLAs. Our enterprise offering provides all of this along with priority access to new features, hands-on onboarding for your team, and direct communication channels with our engineering team for rapid issue resolution.

Dedicated Support

Priority support with direct access to our engineering team, guaranteed SLA response times, and a dedicated account manager for your organization.

On-Premise Deploy

Deploy Heym on your own infrastructure with Docker Compose or Kubernetes. Your data never leaves your servers, ensuring full regulatory compliance and data sovereignty.

Custom Development

Tailor Heym to your organization with custom node types, integrations with internal systems, and workflow templates designed for your specific use cases.

Unlimited Scale

No artificial limits on workflows, executions, team members, or vector stores. Scale your AI automation to match your enterprise workload requirements.

What We Offer

Enterprise licensing gives your organization commercial rights, professional support, and the flexibility to customize Heym for your specific automation needs.

  • Enterprise licensing with full commercial rights for internal and customer-facing deployments
  • Dedicated account manager with guaranteed SLA response times for priority issues
  • Custom integrations and connectors built to connect with your internal APIs and data systems
  • Training and onboarding sessions for your team to accelerate adoption and workflow creation
  • Early access to beta features and influence on the product roadmap through direct feedback channels

Get in Touch

Ready to explore Heym Enterprise? Our team is here to discuss your specific needs.

Response time: Usually within 24 hours

Frequently Asked Questions

Answers to your common questions about Heym.

Heym is built from the ground up around AI. Other platforms now cover parts of the same surface area, but Heym keeps multi-agent orchestration, human-in-the-loop checkpoints, node-level guardrails, built-in RAG, MCP support, and a portable skills system in one runtime designed for AI workflows first.

Yes. Heym is source-available and free to self-host under the Commons Clause + MIT licensing model. You can deploy it on your own infrastructure without licensing fees. Commercial licensing and enterprise support are available for organizations that need those additional rights and services.

Absolutely. Heym is designed for self-hosting. You can deploy it using Docker Compose, Kubernetes, or any container orchestration platform. Your data stays in your infrastructure, giving you complete control and data sovereignty.

Heym supports OpenAI (GPT-4, GPT-3.5), Ollama (for local LLMs), vLLM, Cohere, and any OpenAI-compatible API. You can also configure custom endpoints for your own models.

The AI Assistant lets you describe what you want in natural language or use voice input. It analyzes your request, generates appropriate nodes and edges, and applies them directly to the canvas. Few automation platforms ship a natural-language workflow builder that works directly inside the editor.

MCP (Model Context Protocol) is a standardized protocol for connecting AI assistants to tools and data. Heym supports MCP in two ways: as a client (Agent nodes can connect to any MCP server and gain all its tools), and as a server (your Heym workflows can be exposed as an MCP server for Claude Desktop, Cursor, and other clients).

Heym has built-in vector store management with Qdrant. You can upload PDFs, Markdown, CSV, JSON, and other document types. The RAG node performs semantic search across your documents and returns relevant context that flows directly into your LLM or Agent nodes.

Skills are portable capability bundles. Each skill consists of a SKILL.md instruction file plus optional Python tools. You can drag and drop a .zip or .md file onto an Agent node to extend its context and toolbox. Skills enable code reuse and sharing across workflows and teams.

We love contributions! You can contribute by reporting bugs, suggesting features, submitting pull requests, improving documentation, or sharing your workflows with the community. Join our Discord to connect with other contributors and get started.

Yes! Enterprise licensing includes professional support with SLA guarantees. Contact us at [email protected] for information about commercial use, custom development, and priority support.

The Commons Clause is a license condition that restricts selling the software or offering it as a paid service. You are free to use, modify, and distribute the software, but you may not resell it. This keeps Heym transparent and accessible while preventing commercial redistribution.

Getting started is easy. Clone the repository, copy the example environment file, and run the setup script. The script starts PostgreSQL, runs database migrations, and starts both the frontend and backend servers. Open your browser at the configured port (default: 4017) and you're ready to build.

Still have questions?

Ask on Discord
92 Documentation Pages

Comprehensive Documentation

Every node, dashboard tab, and platform feature is documented with detailed guides, configuration references, and real-world examples. The documentation is built into the Heym application and available on GitHub.

From getting started tutorials that walk you through your first workflow to advanced reference material on agent orchestration, expression DSL syntax, parallel execution, and security architecture — Heym provides the documentation you need to build production-grade AI automations. The library covers dozens of node guides, 15 dashboard tabs, and over 30 reference topics.

Node Reference

38 Node Types

Each node type has a dedicated documentation page covering configuration options, input and output schemas, expression examples, and common use cases. Nodes span seven categories: triggers, AI, logic, data, integrations, automation, and utilities. See the node types overview for the full list.

Input (Webhook) Node

Entry point for workflows triggered by HTTP requests. Supports custom input fields, request metadata, and both synchronous and asynchronous execution modes.

Cron Node

Schedule-based trigger using standard five-field cron expressions. Configure hourly, daily, weekly, or custom intervals for automated recurring workflow execution.
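A minimal matcher for five-field expressions illustrates how such a trigger decides whether to fire — this toy version handles `*`, exact values, and `*/n` steps, while real cron also supports ranges, lists, and names:

```python
def field_matches(field: str, value: int) -> bool:
    # Supports "*", exact numbers, and "*/n" step syntax.
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return int(field) == value

def cron_matches(expr: str, minute, hour, dom, month, dow) -> bool:
    # Standard five-field order: minute hour day-of-month month day-of-week.
    fields = expr.split()
    return all(field_matches(f, v)
               for f, v in zip(fields, (minute, hour, dom, month, dow)))

# Every 15 minutes during the 9 o'clock hour on Mondays (dow 1).
print(cron_matches("*/15 9 * * 1", 30, 9, 12, 6, 1))
```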

Telegram Trigger Node

Receive Telegram bot webhook updates and start workflows instantly. Exposes the full update payload, the primary message object, callback queries, sanitized headers, and the chat ID needed for replies.

IMAP Trigger Node

Poll an IMAP inbox on a configurable minute interval and start workflows for newly arrived email. Exposes subject, sender, text and HTML bodies, decoded headers, and attachment metadata.

WebSocket Trigger Node

Open an outbound client connection to an external WebSocket server and trigger workflows on message, connected, or closed events. Exposes parsed message frames plus connection and close metadata.

RabbitMQ Node

Event-driven trigger that starts workflows when messages arrive in a RabbitMQ queue or exchange. Also supports publishing messages for asynchronous processing.

Slack Trigger Node

Receive Slack Events API webhooks and trigger workflows automatically. Auto-generates a static webhook URL, handles URL verification challenge, and verifies request signatures via a signing secret credential.

Error Handler Node

Automatic error recovery node that runs when any other node fails. Provides error message, failed node label, and timestamp for notifications or retry logic.

LLM Node

Text generation with OpenAI, Ollama, vLLM, or Cohere models. Supports text completion, vision for image analysis, image generation, structured JSON output mode, and configurable temperature.

AI Agent Node

Autonomous agent with Python tools, MCP server connections, skill attachments, sub-agent orchestration, and human-in-the-loop approval checkpoints for supervised automation.

Qdrant RAG Node

Retrieval-augmented generation with Qdrant vector stores. Insert documents, search by semantic similarity with metadata filters, and optionally rerank results with Cohere.

Condition Node

If/else branching based on expression evaluation. Routes execution to truthy or falsy output handles for conditional workflow logic.

Switch Node

Multi-path routing by matching a value against defined cases. Includes a default handle for unmatched values, enabling complex decision trees.

Loop Node

Iterate over arrays, executing downstream nodes per item. Provides item, index, total, isFirst, and isLast context variables for each iteration.
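The iteration context described above can be sketched as:

```python
def loop_contexts(items):
    # Yield the per-iteration context variables the Loop node exposes.
    total = len(items)
    for index, item in enumerate(items):
        yield {
            "item": item,
            "index": index,
            "total": total,
            "isFirst": index == 0,
            "isLast": index == total - 1,
        }

for ctx in loop_contexts(["a", "b", "c"]):
    print(ctx)
```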

Merge Node

Wait for multiple parallel branches to complete and combine their outputs into a single object before continuing downstream execution.

Set Node

Transform and map input data using key-value expression pairs. Supports string manipulation, arithmetic, array operations, and object restructuring.

Variable Node

Read and write workflow-local or persistent global variables that survive across executions. Shared variables accessible via $global.variableName expressions.

DataTable Node

CRUD operations on Heym DataTables with typed columns. Read, write, update, and delete rows in structured first-party storage without external credentials.

Execute Node

Call another workflow as a sub-workflow, passing input via expressions and receiving the output. Enables modular, reusable workflow compositions.

HTTP Node

Make HTTP requests with configurable method, headers, body, and authentication. Parse JSON responses and access status codes for API integration workflows.

WebSocket Send Node

Connect to an external WebSocket, send one text, JSON, or binary message, and continue with send status, payload size, and negotiated subprotocol metadata.

Telegram Node

Send Telegram bot messages with dynamic chat IDs and expression-built message bodies. Ideal for bot replies, operator alerts, and chat-first automation flows.

Slack Node

Send messages to Slack channels via Incoming Webhooks for real-time notifications, alerts, error reporting, and team communication from workflows.

Send Email Node

Send emails through SMTP with dynamic recipients, subject lines, and body content using expressions for transactional and alert automation.

Redis Node

Key-value operations including set, get, hasKey, and deleteKey with optional TTL for caching, rate limiting, and shared state between workflows.

Grist Node

Spreadsheet automation with Grist. Read, write, and manage records, tables, and columns for data-driven workflows backed by collaborative spreadsheets.

Google Sheets Node

Read ranges, append rows, update cells, clear ranges, and inspect sheet info in Google Sheets via OAuth2. Bring your own Google Cloud app — tokens refresh automatically.

BigQuery Node

Run SQL queries against Google BigQuery datasets and insert rows via the streaming insertAll API. Authenticated via OAuth2 with automatic token refresh — bring your own Google Cloud app.

Drive Node

Manage skill-generated files from the workflow: delete files or update share constraints (password, expiry TTL, max downloads) using expressions and file IDs from agent output.

Crawler Node

Web scraping with FlareSolverr proxy support. Extract content using CSS selectors, configure wait times, and retrieve raw HTML or targeted text output.

Playwright Node

Full browser automation with configurable steps: navigate, click, type, screenshot, scroll, and AI-powered auto-heal when selectors change. Includes network capture and cookie management.

Wait Node

Pause workflow execution for a specified duration in milliseconds. Useful for API rate limiting, polling intervals, and timed delays between steps.

Output Node

Return the workflow response to the caller. Supports async downstream mode where subsequent nodes continue running in the background after the response is sent.

JSON Output Mapper Node

Build a JSON object from key-value mappings, like the Set node. When it is the only terminal node, webhook and run responses return that object as the top-level body without label or result wrapping.

Console Log Node

Log expression values to the backend console for debugging during development. Outputs appear in Docker container logs without affecting the workflow.

Throw Error Node

Stop execution immediately with a custom HTTP status code and error message. Used for input validation, access control, and conditional error handling.

Disable Node

Permanently disable another node by label at runtime. Useful for one-time operations like stopping a Cron trigger after a condition is met.

Sticky Note Node

Add markdown documentation to the canvas for team communication, workflow instructions, and implementation notes without affecting execution flow.

Dashboard Tabs

15 Tabs

The Heym dashboard organizes all platform management into 15 dedicated tabs accessible from a single navigation bar. Each tab has its own documentation page explaining features, permissions, and usage patterns.

Workflows Tab

Central hub for creating, organizing, and managing workflows. Supports card and list views, folder and sub-folder organization, drag-and-drop JSON import, and one-click export.

Templates Tab

Save and reuse entire workflows or individual node configurations as templates. Share with everyone or specific users and teams for consistent automation patterns.

Variables Tab

Manage persistent global variables that survive across workflow executions. Store counters, configuration values, and shared state accessible via $global.variableName.

Chat Tab

Direct LLM chat interface for testing models, prototyping prompts, and asking questions without building a full workflow. Supports streaming, markdown, images, and voice input.

Credentials Tab

Securely manage API keys and secrets used by nodes. Values are encrypted at rest with AES-256 Fernet, masked in the UI, and shareable with individual users or teams.

Vector Stores Tab

Create and manage Qdrant vector stores for RAG pipelines. Upload documents in PDF, Markdown, CSV, JSON, and other formats, and share stores with team members.

MCP Tab

Configure Model Context Protocol integration to connect agents to external tool servers. Expose workflow tools via MCP for Claude Desktop, Cursor, and other compatible clients.

Traces Tab

Inspect LLM execution traces with full request and response payloads, timing breakdowns, tool call details, and skills included in each invocation for debugging.

Analytics Tab

Execution metrics including total runs, success rates, average duration, and trend analysis. Filter by time range or workflow, and track performance across 7-, 30-, or 90-day windows.

Evals Tab

Create evaluation suites with test cases for systematic AI workflow testing. Run against multiple models, compare pass/fail rates, and use LLM-as-Judge scoring.

Teams Tab

Create teams and add members by email to share workflows, credentials, variables, templates, and vector stores. All members automatically gain access to shared resources.

DataTables Tab

Create structured data tables with typed columns including string, number, boolean, date, and JSON. Inline editing, CSV import/export, and read or write sharing permissions.

Drive Tab

Browse all files generated by skills across your workflows. Search, download, share, or delete files with metadata including source node, creation date, type, and size.

Scheduled Tab

See all active cron workflows on a day, week, or month calendar. Each block shows the workflow name and cron expression — hover to preview, click to jump to the canvas.

Logs Tab

View Docker container logs for the entire Heym stack including backend, frontend, and PostgreSQL. Filter by container and log level for infrastructure troubleshooting.

Reference

30+ Topics

Advanced reference documentation covering the AI assistant, agent architecture, canvas features, guardrails, parallel execution engine, expression DSL, credential sharing, portal chat UI, skills system, keyboard shortcuts, and security hardening.

AI Assistant

Natural language workflow builder inside the canvas editor. Describe what you want and the AI generates nodes and edges automatically. Supports voice input and streaming responses.

Agent Architecture

Deep dive into agent orchestration patterns including lead/sub-agent delegation, tool calling, MCP integration, skills system, and configurable execution limits.

Canvas Features

Visual editor capabilities including drag-and-drop node placement, edge routing, zoom controls, minimap, snap-to-grid, node pinning, and output inspection panels.

Guardrails

Content safety filtering for LLM and Agent nodes. Blocks violence, hate speech, sexual content, prompt injection, and other unsafe categories with configurable error routing.

Parallel Execution

DAG-based scheduling that automatically runs independent nodes concurrently in a thread pool. Downstream nodes start as soon as dependencies complete.
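The scheduling idea can be sketched with a wave-based toy executor — a simplification, since Heym's engine starts each node the moment its own dependencies finish, and the node names here are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Toy DAG: node -> set of upstream dependencies.
deps = {"fetch": set(), "parse": {"fetch"}, "embed": {"fetch"},
        "report": {"parse", "embed"}}

def run_node(name):
    time.sleep(0.01)  # stand-in for real node work
    return f"{name}:done"

def execute_dag(deps):
    # Repeatedly run every node whose dependencies are satisfied;
    # independent nodes in the same wave run concurrently.
    results, remaining = {}, dict(deps)
    with ThreadPoolExecutor() as pool:
        while remaining:
            ready = [n for n, d in remaining.items() if d.issubset(results)]
            if not ready:
                raise ValueError("cycle detected")  # validation step
            futures = {n: pool.submit(run_node, n) for n in ready}
            for n, fut in futures.items():
                results[n] = fut.result()
                del remaining[n]
    return results

print(execute_dag(deps))
```

Here "parse" and "embed" run in parallel once "fetch" completes, and "report" waits for both.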

Human-in-the-Loop

Approval checkpoints that pause workflow execution and generate a public review link. A human reviewer can approve or reject the pending action before it continues.

Expression DSL

Reference upstream data with $nodeLabel.field syntax. Supports string interpolation, ternary operators, array methods, object access, and built-in helper functions.
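A toy resolver shows the idea behind `$nodeLabel.field` interpolation — single-level fields only, whereas the real DSL also supports ternaries, array methods, and helper functions:

```python
import re

# Outputs of upstream nodes, keyed by node label (labels are illustrative).
outputs = {"fetchUser": {"name": "Ada", "plan": "pro"},
           "score": {"value": 87}}

def resolve(expr: str) -> str:
    # Replace $label.field references with upstream node output values.
    def sub(match):
        label, field = match.group(1), match.group(2)
        return str(outputs[label][field])
    return re.sub(r"\$(\w+)\.(\w+)", sub, expr)

print(resolve("Hello $fetchUser.name, your score is $score.value"))
# Hello Ada, your score is 87
```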

Credentials and Sharing

Encrypted credential storage with AES-256 Fernet. Share API keys with individual users or entire teams while keeping secret values masked and secure.

Portal Chat UI

Turn any workflow with an Input and Output node into a public chat interface. Embed on websites or share a link for end-user self-service powered by your workflows.

Drive & Generated Files

Skill-generated files in the Drive tab: search, download, share links with optional password and expiry. Use the Drive node in workflows to delete files or update access constraints programmatically.

Skills System

Portable capability bundles consisting of SKILL.md instructions and optional Python tools. Drag and drop onto Agent nodes to extend context and toolbox.

Keyboard Shortcuts

Power user shortcuts for node selection, deletion, duplication, undo/redo, zoom, panning, and canvas navigation to speed up workflow creation.

Edit History

Track all changes to workflows with timestamps and user attribution. Restore previous versions and compare differences between workflow revisions.

Webhooks and Triggers

Configure JSON and SSE webhook endpoints, Telegram and Slack bot webhooks, cron schedules, IMAP mailbox polling, outbound WebSocket client triggers, and RabbitMQ consumers for event-driven automation.

SSE Streaming

Stream workflow execution over Server-Sent Events with execution_started, node_start, node_complete, and execution_complete events. The editor can generate ready-to-run cURL commands and per-node start messages.
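The wire format can be illustrated with a minimal parser for an SSE body — event names follow the list above, and a production client would use a proper SSE library:

```python
def parse_sse(stream_text: str):
    # Parse a raw SSE body into (event, data) pairs; events are
    # separated by blank lines per the SSE format.
    events = []
    event, data = None, []
    for line in stream_text.splitlines():
        if line.startswith("event:"):
            event = line[6:].strip()
        elif line.startswith("data:"):
            data.append(line[5:].strip())
        elif line == "" and event:
            events.append((event, "\n".join(data)))
            event, data = None, []
    return events

raw = (
    "event: execution_started\ndata: {}\n\n"
    "event: node_start\ndata: {\"node\": \"LLM\"}\n\n"
    "event: execution_complete\ndata: {\"status\": \"success\"}\n\n"
)
print(parse_sse(raw))
```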

User Settings

Personal configuration including theme, default LLM provider, API key management, notification preferences, and display options for the canvas editor.

Security

Security architecture overview covering JWT authentication, HttpOnly cookies, credential encryption, role-based access control, and deployment hardening recommendations.

Workflow Organization

Organize workflows with folders, sub-folders, tags, and pinned favorites. Search workflows with the command palette and filter by status or recent activity.

Join Our Community

Connect with developers, share workflows, and shape the future of AI automation.

Heym is built in the open by a growing community of contributors and users. Whether you want to report a bug, suggest a new node type, share a workflow template, or contribute code, there are multiple ways to get involved. Join the Discord for real-time discussions, browse the GitHub repository for source code and issues, or dive into the documentation to learn every feature in depth.

Discord Community

Connect with developers building AI automations, share your workflows and templates, and get real-time help from the Heym team and community members. Ask questions about node configuration, agent orchestration, or deployment, and see how others are using Heym in production.

Join Discord

GitHub Repository

Star the repo to stay updated, report bugs with reproduction steps, request features through GitHub Issues, and contribute pull requests to the source-available project. Browse the full source code for the backend, frontend, and documentation.

View on GitHub

Documentation

Comprehensive guides covering every node type, the expression DSL, agent orchestration patterns, RAG pipeline setup, MCP configuration, and deployment options. Includes getting started tutorials, API reference, and advanced workflow examples.

Read Docs

Resources

Stay Updated

Get the latest news, feature releases, and AI automation tips delivered to your inbox.

Subscribe to the Heym newsletter for monthly updates on new node types, agent orchestration improvements, community workflow highlights, and upcoming features. We respect your inbox and only send content that helps you build better AI automations.

Stop gluing AI tools together.
Start shipping workflows.

One runtime. Your infrastructure. Every feature in the open.

Docker Compose · K8s Helm · MIT + Commons Clause

Heym is a source-available AI-native workflow automation platform that lets you build, visualize, and run intelligent pipelines without writing code. Using a drag-and-drop canvas, you connect a broad library of node types across seven categories — triggers, AI, logic, data, integrations, automation, and utilities — into production-grade workflows that run on your own infrastructure.

Every workflow is a directed acyclic graph (DAG) that Heym compiles, validates, and executes in parallel where dependencies allow, giving you maximum throughput without manual parallelism boilerplate.

The AI Assistant accepts natural language or voice input and generates an entire workflow — nodes, edges, and configuration — applying it to the canvas instantly. You can describe a support ticket triage pipeline, a document intelligence workflow, or a multi-agent research system in plain English and have a working prototype in seconds.

The assistant streams its response and automatically parses and applies any valid workflow JSON. Voice input is processed in the browser, transcribed locally, and sent to the backend alongside your existing canvas state so the assistant can modify, extend, or refactor any part of the current workflow.

Heym supports multi-agent orchestration with up to five levels of nesting. An orchestrator agent can delegate tasks to named sub-agents and sub-workflows, passing context and receiving results.

Each agent supports Python tool calling, MCP server connections, portable skills bundles, human-in-the-loop approval checkpoints that generate a public review link, and automatic context compression for long-running tasks. Content guardrails sit between the LLM output and downstream nodes, rejecting non-compliant responses before they propagate through the rest of the workflow.

Built-in RAG pipelines connect to managed Qdrant vector stores. Upload documents, configure chunking and embedding, and enable semantic search with metadata filters and optional Cohere reranking — all from the canvas without custom code.

Execution traces record the full request and response payload, token usage, tool calls, and timing for every LLM invocation, making production debugging straightforward. The trace viewer shows a waterfall timeline of every node execution, color-coded by status. You can replay any historical execution against a different model or configuration directly from the trace view.

Heym compares favorably to n8n, Zapier, and Make.com as an AI-first alternative. Unlike general automation tools that added AI as a plugin, Heym was designed from the ground up for LLM orchestration, multi-agent coordination, and RAG pipelines.

Heym is self-hostable with Docker Compose or Kubernetes, keeping all data within your infrastructure perimeter — a requirement for teams processing PII, financial records, or proprietary code. The source code is available on GitHub under the Commons Clause and MIT license. Enterprise customers receive a commercial license with SLA-backed support and a dedicated Slack channel.

Trigger nodes include Webhook, Telegram, Cron, IMAP, outbound WebSocket Trigger, RabbitMQ, and Error Handler. Webhook triggers support path prefixes, authentication tokens, and response templates so callers receive a structured reply while the workflow continues asynchronously.

Cron triggers support both simple intervals and full POSIX cron expressions with timezone support. IMAP triggers poll a mailbox on a configurable minute interval. WebSocket Trigger nodes connect Heym to an external socket server and can fire on message, connected, or closed events. The node configuration panel keeps all trigger settings editable on-canvas.

The AI node category provides LLM, AI Agent, and Qdrant RAG nodes. The LLM node supports OpenAI GPT-4o, GPT-4, GPT-3.5 Turbo, Ollama local models, vLLM, Cohere Command, and any OpenAI-compatible API endpoint. You configure system prompt, temperature, max tokens, and response format per node. For large prompt lists, optional batch mode uses OpenAI’s Batch API for efficient bulk runs with a dedicated status branch for progress and side effects.

The AI Agent node wraps an LLM with a tool-calling loop, connecting to Python tools, HTTP tools, or any MCP server. Agent nodes support streaming output, context compression, and per-invocation guardrails. The Qdrant RAG node performs dense, sparse BM25, or hybrid search, returning ranked chunks with metadata and scores.

Logic nodes include Condition, Switch, Loop, and Merge. The Condition node evaluates a boolean expression and routes execution to one of two branches. Switch routes to one of up to sixteen named branches based on a value match or regex pattern.

The Loop node iterates over an array, executing its child sub-graph once per item and collecting results. The Merge node waits for multiple upstream branches and combines their outputs using concatenation, deep merge, or a custom merge function written in the expression DSL.
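As an illustration of the deep-merge strategy — a sketch, not Heym's implementation:

```python
def deep_merge(a: dict, b: dict) -> dict:
    # Recursively merge b into a copy of a; b wins on non-dict conflicts.
    out = dict(a)
    for key, value in b.items():
        if key in out and isinstance(out[key], dict) and isinstance(value, dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

left = {"user": {"name": "Ada"}, "tags": ["a"]}
right = {"user": {"email": "ada@example.com"}, "tags": ["b"]}
print(deep_merge(left, right))
# {'user': {'name': 'Ada', 'email': 'ada@example.com'}, 'tags': ['b']}
```

Note that scalar and list values are overwritten rather than combined; a concatenation strategy would instead append the two branches' lists.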

Data nodes include Set, Variable, DataTable, and Execute. The Set node assigns named variables from the expression DSL, prior node outputs, or hardcoded literals. The Variable node provides a workflow-scoped mutable store that persists across loop iterations and sub-agent calls.

The DataTable node provides a structured relational table within the workflow, supporting append, query, update, and delete without an external database. The Execute node calls another workflow as a sub-workflow, passing input via expressions and receiving its output for modular, reusable composition.

Integration nodes include HTTP, WebSocket Send, Telegram, Slack, Send Email, Redis, Grist, Google Sheets, and BigQuery. The HTTP node supports GET, POST, PUT, PATCH, DELETE, and HEAD with configurable headers, authentication (Bearer, Basic, API Key), and response parsing (JSON, text, binary).

The WebSocket Send node opens an outbound client connection to an external socket, sends one text, JSON, or binary message, and returns delivery metadata. The Slack node sends messages, updates existing messages, and posts file uploads via a bot token. Send Email connects to SMTP with TLS support and renders HTML or plain-text templates. Redis supports GET, SET, DEL, LPUSH, RPOP, PUBLISH, and arbitrary command passthrough.

Google Sheets reads ranges, appends and updates rows, and inspects sheet metadata via OAuth2. The BigQuery node runs SQL queries and inserts rows via the streaming insertAll API with automatic OAuth2 token refresh. Grist reads and writes rows in Grist data documents using the Grist API.

Automation nodes include Crawler and Playwright. The Crawler node fetches URLs using a configurable HTTP client, extracts content with CSS selectors or XPath, and returns structured data arrays.

The Playwright node launches a Chromium browser, navigates to a URL, and executes interaction steps such as click, fill, select, and screenshot. It includes an AI auto-heal feature that regenerates broken selectors using a vision model when a step fails due to a DOM change.

Utility nodes include Output, Wait, Console Log, Throw Error, Disable Node, and Sticky Note. The Output node marks the terminal result of a workflow execution. The Wait node pauses execution for a configurable duration or until a webhook callback arrives, enabling human-in-the-loop patterns.

Console Log emits messages to the execution trace for debugging. Throw Error terminates the current branch with a typed error that upstream Error Handler nodes can catch. Disable Node removes another node from execution at runtime without deleting it. Sticky Note attaches a text annotation to the canvas for documentation.

MCP integration works in both directions. As an MCP client, Agent nodes connect to any external MCP server and gain access to all tools it exposes. Heym automatically discovers available tools on connection and presents them in the agent configuration panel.

As an MCP server, Heym exposes all published workflows as callable tools over the MCP protocol. Claude Desktop, Cursor, VS Code extensions, and other MCP-compatible clients can invoke your workflows directly from their native interfaces.

The skills system enables portable, reusable capability bundles for Agent nodes. A skill is a zip archive or Markdown file containing a SKILL.md instruction document and optional Python tool files. Dragging a skill onto an Agent node extends that agent's system context and toolbox without modifying the node configuration.

Skills can define custom tool schemas, persistent memory structures, and multi-step reasoning templates. Agent nodes include an AI Skill Builder modal for drafting and revising skills with live previews. The same skill can be shared across multiple agents and workflows for consistent behavior across an organization.

The Portal Chat UI turns any workflow into a standalone chat interface that can be embedded in a web application or shared as a public URL. Portal workflows receive user messages, process them through AI and logic nodes, and stream responses back in real time.

Each portal instance is isolated with its own session state, making it suitable for customer-facing applications, internal support tools, and interactive demos. Portal sessions persist conversation history and support multi-turn interactions with context carried across turns.

The Evaluation Suite lets you measure workflow quality systematically. Define a dataset of input-output pairs, select LLM judge models, and run the suite against any workflow version. Heym records pass rates, latency distributions, token costs, and judge scores per item in a dashboard.

Evaluation results are versioned so you can compare performance across workflow iterations, prompt changes, or model upgrades before promoting a new version to production.
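The pass-rate bookkeeping can be sketched as follows — the field names and the 0.8 threshold are illustrative, not Heym's schema:

```python
def summarize_eval(results):
    # results: one dict per test case with a judge score in [0, 1]
    # and a pass threshold chosen per suite.
    passed = [r for r in results if r["judge_score"] >= r["threshold"]]
    return {
        "total": len(results),
        "passed": len(passed),
        "pass_rate": len(passed) / len(results),
    }

run = [
    {"case": "refund question", "judge_score": 0.92, "threshold": 0.8},
    {"case": "angry customer", "judge_score": 0.61, "threshold": 0.8},
    {"case": "billing lookup", "judge_score": 0.88, "threshold": 0.8},
]
print(summarize_eval(run))
```

Comparing these summaries across workflow versions is what makes promotion decisions measurable rather than anecdotal.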

Deployment options include Docker Compose for single-server installations and Kubernetes Helm charts for cloud-native deployments. The Docker Compose setup starts PostgreSQL, the FastAPI backend, and the Vue.js frontend with a single command.

The backend exposes a REST API with interactive documentation at /docs, plus SSE-backed streaming endpoints for real-time workflow progress. Horizontal scaling is supported by running multiple backend replicas behind a load balancer, with execution state stored in PostgreSQL and Redis for cross-replica coordination.

Teams and credential sharing allow organizations to collaborate on workflows securely. Credentials such as API keys, OAuth tokens, SMTP passwords, and database connection strings are stored encrypted in PostgreSQL and referenced by name in node configurations.

Team members with the appropriate role can use shared credentials without seeing the underlying secret values. Workflow ownership, edit permissions, and execution access are controlled per team, enabling safe delegation of automation development across large engineering organizations.