# Workflow Architecture
This page documents the current Jet Admin workflow runtime: a DAG-oriented execution engine that runs inside the backend process and uses an in-memory queue adapter built on fastq.
## High-level model
The workflow engine follows a check-decide-act loop:
- check what node just finished,
- decide which downstream nodes are eligible,
- act by queuing the next node jobs or completing the run.
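The loop above can be sketched in a few lines. All names here (`run`, `onNodeResult`, the queue shape) are illustrative, not actual Jet Admin identifiers:

```javascript
// Hypothetical sketch of the check-decide-act loop, assuming a run object
// that tracks node IDs, edges, completed nodes, and a job queue.
function onNodeResult(run, result) {
  // check: record which node just finished
  run.completed.add(result.nodeId);

  // decide: downstream nodes whose upstream dependencies are all complete
  const eligible = run.edges
    .filter((e) => e.source === result.nodeId)
    .map((e) => e.target)
    .filter((target) =>
      run.edges
        .filter((e) => e.target === target)
        .every((e) => run.completed.has(e.source))
    );

  // act: queue the next node jobs, or complete the run
  if (eligible.length === 0 && run.completed.size === run.nodes.length) {
    run.status = "completed";
  } else {
    eligible.forEach((nodeId) => run.queue.push({ nodeId }));
  }
}
```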
## Important runtime clarification
Jet Admin uses an in-memory queue (fastq), not RabbitMQ.
The workflow runtime is initialized from:
- `apps/backend/config/queue.config.js` - in-memory queue using fastq
- `apps/backend/workers/workflowWorkers.js` - worker initialization
- `apps/backend/workers/taskWorker.js` - task processor
- `apps/backend/workers/resultsConsumer.js` - results orchestrator
This means:
- ✅ No external broker needed - everything runs in-process
- ✅ Simpler deployment - no RabbitMQ container or service required
- ✅ Faster development - direct function calls, no network overhead
- ⚠️ Process-local queues - queue state is lost on restart
- ⚠️ Limited horizontal scaling - tasks don't distribute across instances
## Core runtime components
### Workflow service

`workflow.service.js` is the API-facing service layer for workflow definition CRUD and run orchestration.
Important operations include:
- `executeWorkflow` - starts a persisted workflow run,
- `testWorkflow` - starts an unsaved test run from client-provided nodes and edges,
- `stopTestWorkflow` - removes a test instance,
- run-status retrieval for standard UI and widget-driven views.
### Workflow workers bootstrap

`workflowWorkers.js` initializes the active queue runtime and starts:
- the task worker,
- the results consumer/orchestrator loop,
- graceful queue shutdown hooks.
### Queue adapter

`apps/backend/config/queue.config.js` provides queue semantics using fastq and EventEmitter.
It exposes queue-like operations such as:
- `initializeQueue`
- `closeQueue`
- `addNodeJob`
- `addResult`
- `registerTaskWorker`
- `registerResultsWorker`
- `publishToMonitor`
The public shape intentionally mirrors the older AMQP-oriented abstraction so the workflow layer stays decoupled from the transport implementation.
### Task worker

`workers/taskWorker.js` is responsible for executing node jobs.
It:
- receives a task payload,
- resolves the handler from `nodeType`,
- executes the handler with workflow context helpers,
- publishes either success or error results,
- retries failed jobs with exponential backoff by re-enqueuing them with delay.
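The dispatch path can be sketched as follows. The handler registry, payload fields, and result shape are assumptions for illustration, not the exact code in `workers/taskWorker.js`:

```javascript
// Sketch of the task worker's dispatch-and-publish path. Handlers are
// looked up by nodeType; retry/backoff handling is covered separately
// under "Retry behavior".
const handlers = {
  // e.g. javascript: runJavaScriptNode, dataQuery: runDataQueryNode, ...
};

async function processTask(job, queue) {
  try {
    const handler = handlers[job.nodeType];
    if (!handler) throw new Error(`Unknown node type: ${job.nodeType}`);
    const output = await handler(job.config, job.context);
    queue.addResult({ nodeId: job.nodeId, success: true, output });
  } catch (err) {
    queue.addResult({ nodeId: job.nodeId, success: false, error: err.message });
  }
}
```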
### Orchestrator

`orchestrator/orchestrator.js` is the coordination layer.
It:
- consumes node results,
- updates workflow instance state and context,
- emits node/status websocket updates,
- determines downstream nodes using the DAG,
- queues additional work or marks the workflow as complete.
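A hedged sketch of the result-handling step these bullets describe. The socket event names, the `scheduleNext` helper, and the instance fields are illustrative, not actual Jet Admin identifiers:

```javascript
// Illustrative orchestrator step: merge a node result into instance state,
// emit websocket updates, and either queue downstream work or finish.
async function handleNodeResult(instance, result, io, scheduleNext) {
  // update workflow instance state and context
  instance.context[result.nodeId] = result.output;
  instance.nodeStatus[result.nodeId] = result.success ? "completed" : "failed";

  // emit a node-level status update so the editor shows live progress
  io.emit("workflow:node-status", {
    instanceId: instance.id,
    nodeId: result.nodeId,
    status: instance.nodeStatus[result.nodeId],
  });

  // determine downstream nodes via the DAG; zero queued jobs means done
  const queued = await scheduleNext(instance, result.nodeId);
  if (queued === 0) {
    instance.status = "completed";
    io.emit("workflow:status", { instanceId: instance.id, status: "completed" });
  }
}
```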
### State manager and scheduler
The workflow engine separates responsibilities:
- state management persists context and execution state,
- DAG scheduling decides which next edges/nodes should fire,
- workers remain focused on isolated node execution.
## Execution lifecycle

### Production runs vs test runs
The engine supports two important modes.
#### Production run
- workflow definition is read from persisted records,
- node/edge graph comes from the database,
- the run behaves like a normal saved workflow execution.
#### Test run
- the frontend can send unsaved nodes and edges,
- those definitions are stored in context using internal markers such as `__workflowDefinition` and `__isTestRun`,
- the orchestrator resolves downstream nodes from the in-memory graph instead of reloading from the database,
- this enables iteration in the editor before saving the workflow.
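The branching described above can be sketched like this. `resolveGraph` and `loadFromDb` are hypothetical names; the context markers are the ones documented above:

```javascript
// Illustrative sketch: pick the graph source for a run. Test runs carry
// their unsaved graph in context; production runs load the persisted one.
function resolveGraph(instance, loadFromDb) {
  if (instance.context.__isTestRun) {
    return instance.context.__workflowDefinition; // unsaved nodes/edges
  }
  return loadFromDb(instance.workflowId); // persisted definition
}
```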
### Context model
The workflow instance stores a mutable JSON context that acts as shared execution state.
Typical contents include:
- `input` for initial arguments,
- per-node outputs keyed by node ID,
- internal metadata for test runs,
- current execution status and supporting state used by downstream nodes.
Conceptually, the context evolves like this:
```json
{
  "input": { "customerId": 42 },
  "startNode": { "success": true },
  "fetchDataNode": { "rows": [{ "id": 1 }] },
  "transformNode": { "count": 1 }
}
```
## Node execution model
Each task job typically contains enough information for stateless execution:
- workflow instance ID,
- workflow ID,
- node ID,
- node type,
- node configuration,
- context snapshot,
- retry metadata.
The worker then dispatches to a handler that knows how to execute that node type.
Examples of node behavior include:
- JavaScript execution,
- data-query or datasource-driven actions,
- control-flow decisions,
- integration actions,
- widget-aware workflow bridging.
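As an example of the first behavior, a JavaScript-execution handler might evaluate user code against the shared context. This is a hypothetical sketch (real handlers also deal with sandboxing, validation, and helper resolution); the handler name and result shape are assumptions:

```javascript
// Hypothetical handler for a "javascript" node type: run the node's
// configured code with the workflow context in scope and return its
// return value as the node output.
async function javascriptHandler(config, context) {
  const fn = new Function("context", config.code);
  return { result: fn(context) };
}
```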
## Realtime feedback
Workflow execution is tightly integrated with Socket.IO.
The orchestrator emits updates for:
- individual node status changes,
- overall workflow status changes,
- widget-workflow integration state where applicable.
This is what allows the frontend to subscribe to a run and show near-live progress without polling every internal step.
## Retry behavior
Retries are implemented in the current queue adapter using delayed re-enqueueing rather than broker-managed delayed queues.
In practice this means:
- failed tasks can be retried with exponential backoff,
- delays are implemented with in-process timers,
- queue names such as `workflow.tasks.dlq` are preserved for compatibility even though the runtime is in-memory.
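The delayed re-enqueueing pattern can be sketched as follows. The function name, base delay, and attempt cap are illustrative, not the actual values in the adapter:

```javascript
// Sketch of timer-based retry with exponential backoff: re-enqueue the
// failed job after baseMs * 2^(attempt-1), giving up past maxAttempts.
function retryWithBackoff(queue, job, baseMs = 1000, maxAttempts = 5) {
  const attempt = (job.attempt || 0) + 1;
  if (attempt > maxAttempts) {
    // exhausted: surface a terminal failure result instead of re-queueing
    queue.addResult({ nodeId: job.nodeId, success: false, error: "max retries exceeded" });
    return null;
  }
  const delay = baseMs * 2 ** (attempt - 1); // 1s, 2s, 4s, ...
  return setTimeout(() => queue.addNodeJob({ ...job, attempt }), delay);
}
```

Because the delay lives in an in-process `setTimeout`, a pending retry is lost if the process restarts, which is one of the process-local trade-offs noted earlier.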
## Frontend relationship
The workflow UI depends on several frontend/package pieces:
- React Flow for graph editing,
- `@jet-admin/workflow-nodes` for node definitions and editors,
- `@jet-admin/workflow-edges` for custom edge rendering,
- JSON Forms-based configuration surfaces,
- socket subscriptions for live run feedback.
## Extending the workflow engine
Adding a new node type usually requires work in two places.
### Frontend/package side
- add the node definition and visual/configuration UI in `packages/workflow-nodes`,
- register the node so it appears in the editor and inspector UI.
### Backend side
- add the corresponding execution handler under the workflow worker handlers,
- ensure the handler returns the expected result shape,
- support any required validation or helper resolution.
The queue/orchestrator layer usually does not need structural changes because it dispatches dynamically by `nodeType`.
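In other words, adding a backend node type amounts to adding one entry to a handler registry. The registration function, the `sendSlackMessage` node type, and the result shape below are hypothetical, chosen only to show the pattern:

```javascript
// Hypothetical handler-registry pattern: the dispatcher looks handlers
// up by nodeType, so a new type is just a new entry.
const nodeHandlers = {};

function registerNodeHandler(nodeType, handler) {
  nodeHandlers[nodeType] = handler;
}

// Example registration for an imaginary integration node.
registerNodeHandler("sendSlackMessage", async (config, context) => {
  // ...call the integration here, then return the expected result shape
  return { success: true, output: { channel: config.channel } };
});
```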
## Operational trade-offs of the current design

### Advantages
- simple local development,
- fewer moving parts than a broker-based distributed system,
- fast feedback for editor test runs,
- low cognitive overhead while the feature set evolves.
### Constraints
- queue state is process-local,
- horizontal scaling semantics are more limited,
- delayed retries are timer-based,
- resilience characteristics differ from a dedicated broker topology.
## Summary
Jet Admin's workflow engine is currently a backend-embedded orchestration runtime with:
- persisted workflow state in PostgreSQL,
- in-memory task/result queues,
- dynamic node handlers,
- DAG scheduling,
- retry support,
- realtime socket updates,
- explicit support for unsaved test execution.
That is the model to use when reasoning about the present codebase.