# Unified Input Lifecycle
All execution within Jet-Admin follows a centralized, contract-driven input lifecycle architecture. This standardizes how execution contracts are defined, how input values are resolved, and how templates are safely evaluated.
## Core Principle

Execution Contract (Input Definitions) + Input Provider Values = Resolved Execution Inputs
Where:
- Execution Contract: Defines what inputs exist and their expected types.
- Input Providers: Supply the raw runtime values.
- Resolver: A unified pipeline that prepares and validates inputs for execution.
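As a minimal sketch of this principle (all names here are illustrative, not Jet-Admin's actual API):

```javascript
// Illustrative sketch: contract + provider values = resolved inputs.
// Execution Contract: what inputs exist and their expected types.
const contract = [
  { key: "userId", type: "number", required: true },
  { key: "limit", type: "number", required: false, default: 10 },
];

// Input Provider values: raw, unvalidated runtime values.
const providerValues = { userId: "42" };

// Resolver: prepares and validates inputs for execution.
function resolve(definitions, values) {
  const resolved = {};
  for (const def of definitions) {
    let value = values[def.key] !== undefined ? values[def.key] : def.default;
    if (value === undefined) {
      if (def.required) throw new Error(`Missing required input: ${def.key}`);
      continue;
    }
    if (def.type === "number") value = Number(value); // coerce to declared type
    resolved[def.key] = value;
  }
  return resolved;
}

console.log(resolve(contract, providerValues)); // { userId: 42, limit: 10 }
```

Note how the provider's raw string `"42"` is coerced to the contract's declared `number`, and the missing `limit` is filled from the contract's default.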
## 1. Execution Contract
The Execution Contract defines the exact inputs required to execute a unit (Workflow, Query, Cron Job, etc.).
| Executable | Contract Storage |
|---|---|
| Workflow | `workflowOptions.args` |
| Data Query | `dataQueryOptions.args` |
Each input is defined as an `InputDefinition`:

```typescript
{
  key: string;
  type: "string" | "number" | "boolean" | "object" | "array";
  required: boolean;
  default?: any;
  supportsTemplate: boolean;
  definitionSource: "native" | "derived";
}
```
### Definition Sources

Definitions are extracted via the `DefinitionProvider` utility. There are two origins for definitions:

- Native: Defined directly on the executable (e.g., Workflow inputs, Data Query parameters).
- Derived: Inherited from another executable:
  - Node: Inherits from its linked Query.
  - Widget: Inherits from its linked Workflow.
  - Cron Job: Inherits from its linked Workflow.
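The inheritance above can be sketched as a small routing function. The entity shapes and the `getDefinitions` name are hypothetical (the source's actual extractors are `extractQueryDefinitions` and `extractWorkflowDefinitions`); only the native/derived split is taken from the document:

```javascript
// Hypothetical sketch of definition derivation (entity shapes are illustrative).
// A Node has no definitions of its own; it inherits from its linked Query.
function getDefinitions(entity) {
  switch (entity.kind) {
    case "query": // Native: defined directly on the executable
      return entity.dataQueryOptions.args.map((d) => ({ ...d, definitionSource: "native" }));
    case "node": // Derived: inherits from the linked Query
      return getDefinitions(entity.linkedQuery).map((d) => ({ ...d, definitionSource: "derived" }));
    default:
      throw new Error(`Unknown entity kind: ${entity.kind}`);
  }
}

const query = {
  kind: "query",
  dataQueryOptions: { args: [{ key: "userId", type: "number", required: true }] },
};
const node = { kind: "node", linkedQuery: query };

console.log(getDefinitions(node));
// [{ key: 'userId', type: 'number', required: true, definitionSource: 'derived' }]
```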
## 2. Input Providers

Input Providers supply the raw, unvalidated values (often called `inputArgs`) that attempt to satisfy the Execution Contract.
| Provider / Trigger | Supplies Input Values To |
|---|---|
| Widget Config | Workflow Execution |
| Cron Job Config | Workflow Execution |
| API / Manual Run | Workflow / Query Execution |
| Node Config | Data Query Execution |
## 3. The Two-Stage Resolution Pipeline

Jet-Admin uses a mandatory Two-Stage Pipeline to resolve, validate, and safely inject values. This pipeline replaces fragmented, module-specific input logic with a single code path.
### Stage 1: The InputResolver Pipeline

All runtimes invoke the centralized `resolveInputs()` pipeline before execution begins.
Pipeline Execution Order:

1. Fetch Definitions: Derives the canonical `InputDefinition[]` from the target entity.
2. Resolve Templates: If `supportsTemplate` is true, evaluates context references (e.g. `{{ctx.input.userId}}` → `123`).
3. Apply Defaults: Injects default values if the runtime value is missing.
4. Coerce Types: Safely coerces the value to the declared type (e.g. string `"123"` to integer `123`).
5. Validate Required: Ensures all `required: true` fields are populated.
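The stages above can be condensed into a single function. This is a simplified, assumed implementation for illustration, not Jet-Admin's actual `resolveInputs()` source; step 1 (fetching definitions) is represented here by the `definitions` argument:

```javascript
// Simplified sketch of the Stage 1 pipeline (assumed implementation).
function resolveTemplates(value, ctx) {
  // Replace {{ctx.input.xyz}} references with values from the execution context.
  return String(value).replace(/\{\{ctx\.input\.(\w+)\}\}/g, (_, key) => ctx.input[key]);
}

function coerce(value, type) {
  if (type === "number") return Number(value);
  if (type === "boolean") return value === "true" || value === true;
  return value;
}

function resolveInputs(definitions, runtimeValues, ctx = { input: {} }) {
  const resolved = {};
  for (const def of definitions) {
    let value = runtimeValues[def.key];
    // 2. Resolve templates
    if (def.supportsTemplate && typeof value === "string") value = resolveTemplates(value, ctx);
    // 3. Apply defaults
    if (value === undefined) value = def.default;
    // 4. Coerce types
    if (value !== undefined) value = coerce(value, def.type);
    // 5. Validate required
    if (def.required && value === undefined) {
      throw new Error(`Validation failed: missing required input "${def.key}"`);
    }
    if (value !== undefined) resolved[def.key] = value;
  }
  return resolved;
}

const defs = [{ key: "userId", type: "number", required: true, supportsTemplate: true }];
const out = resolveInputs(defs, { userId: "{{ctx.input.userId}}" }, { input: { userId: "123" } });
console.log(out); // { userId: 123 }
```

A missing required field aborts with a precise validation error, matching the fail-fast behavior described below.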
Failure at Stage 1 immediately aborts execution and throws a precise validation error.
### Stage 2: Engine-Specific Injection

Once the InputResolver returns a safe, validated, and typed set of `resolvedInputs`, the specific execution engine takes over.

For example, for a Data Query:

The QueryEngine receives the `resolvedInputs` and performs a secondary `resolveTemplate` sweep directly against the SQL query body, safely injecting the typed execution arguments into the database driver.
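One plausible shape for such a sweep is to swap template references in the SQL body for driver placeholders, passing the typed values as bound parameters. This is an illustrative sketch only; the actual QueryEngine's `resolveTemplate` may behave differently:

```javascript
// Illustrative Stage 2 sketch: replace {{...}} references in the SQL body with
// positional placeholders, collecting the typed values as bound parameters.
function resolveTemplate(queryBody, executionArgs) {
  const params = [];
  const sql = queryBody.replace(/\{\{(\w+)\}\}/g, (_, key) => {
    if (!(key in executionArgs)) throw new Error(`Unresolved template: ${key}`);
    params.push(executionArgs[key]); // already validated and typed by Stage 1
    return `$${params.length}`;      // e.g. Postgres-style positional placeholder
  });
  return { sql, params };
}

const result = resolveTemplate(
  "SELECT * FROM users WHERE id = {{userId}} LIMIT {{limit}}",
  { userId: 123, limit: 10 }
);
console.log(result);
// { sql: 'SELECT * FROM users WHERE id = $1 LIMIT $2', params: [123, 10] }
```

Binding parameters rather than splicing raw strings is what makes the injection "safe": the driver never interprets the values as SQL.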
## System Data Flow

## Architectural Rules
To maintain system integrity, the following rules are strictly enforced across the backend and frontend:
- Definitions Determine Behavior: Modules do not guess input shapes; they follow the Definitions exactly.
- Centralized Resolution: Runtimes (Workflows, Queries) must NOT resolve templates or validate inputs independently. They must use the `resolveInputs()` pipeline.
- Execution Receives Safe Inputs: Execution functions (like `startWorkflow` or `executeDataQuery`) assume `inputArgs` are already fully validated by the pipeline.
## Execution Scenarios

This section outlines the chronological function invocations for different execution scenarios across Jet-Admin, highlighting the Unified Input Lifecycle (Stage 1 and Stage 2 resolution) in each.
### Scenario 1: Testing an Unsaved Data Query
When a user clicks "Run" in the Data Query editor before saving.
Chronological Invocation:
1. `dataQuery.controller.js`: `testDataQuery(req, res)`
2. `dataQuery.service.js`: `runDataQueryByData(tempQuery, inputArgs)`
3. `definitionProvider.util.js`: `extractQueryDefinitions(tempQuery)` (extracts `dataQueryOptions.args` directly from the request payload)
4. `inputArgs.util.js`: `resolveInputs(type: 'query', definitions, runtimeValues)` → (Stage 1 Pipeline)
   - Resolves templates against context (if provided)
   - Applies default values
   - Coerces to declared types (e.g. string `"123"` to integer `123`)
   - Validates required fields
5. `queryExecution.adapter.js`: `executeDataQuery({ executionArgs: resolved })`
6. `engine.js` (QueryEngine): `run(executionArgs)`
7. `engine.js` (QueryEngine): `resolveTemplate(queryBody, executionArgs)` → (Stage 2 Pipeline)
   - Injects the validated arguments directly into the SQL string or JSON body.
8. [Specific DB Adapter]: `execute()`
### Scenario 2: Running a Saved Data Query
When a query is executed via its API endpoint or triggered standalone.
Chronological Invocation:
1. `dataQuery.controller.js`: `runDataQuery(req, res)`
2. `dataQuery.service.js`: `runDataQueryByID(dataQueryID, inputArgs)`
3. `prisma.tblDataQueries.findUnique(dataQueryID)` (fetches the saved query config)
4. `definitionProvider.util.js`: `extractQueryDefinitions(savedQuery)`
5. `inputArgs.util.js`: `resolveInputs(type: 'query', definitions, runtimeValues)` → (Stage 1 Pipeline)
6. `queryExecution.adapter.js`: `executeDataQuery({ executionArgs: resolved })`
7. `engine.js` (QueryEngine): `run(executionArgs)`
8. `engine.js` (QueryEngine): `resolveTemplate(queryBody, executionArgs)` → (Stage 2 Pipeline)
### Scenario 3: Testing/Running a Workflow (with Query & JS Nodes)

When a workflow triggers, evaluating a Data Query node followed by a JavaScript logic node.
Chronological Invocation:

(Workflow Initialisation)

1. `workflow.controller.js`: `testWorkflow()` / `executeWorkflow()`
2. `workflow.service.js`: `testWorkflow()` / `executeWorkflow(inputArgs)`
3. `inputArgs.util.js`: `resolveInputs()` (for `executeWorkflow` only: validates the initial workflow-level arguments against `workflowOptions.args`)
4. `orchestrator.js`: `startWorkflow()` (stores the validated inputs in the initial state)
(Node 1: Data Query Node)

5. `taskWorker.js`: `processTask(dataQueryNode)`
6. `resolver.js`: `resolveTemplate(nodeConfig, workflowContext)` → (Stage 1 for Nodes)
   - Resolves dynamic mappings like `{{ctx.input.userId}}` into actual values based on the current workflow state.
7. `dataQueryHandler.js`: `process()`
8. `dataQuery.service.js`: `runDataQueryByID(nodeConfig.dataQueryID, resolvedNodeArgs)`
   - → Falls back into the Scenario 2 flow.
   - Calls `resolveInputs` against the query definitions to ensure the node passed the correct data types.
   - Stage 2: `QueryEngine.resolveTemplate` injects the data into the SQL.
9. `orchestrator.js`: `handleTaskResult()` (saves the query result to state, triggers the next node)
(Node 2: Javascript Node)

10. `taskWorker.js`: `processTask(jsNode)`
11. `resolver.js`: `resolveTemplate(nodeConfig, workflowContext)` (injects previous query results into the JS node variables)
12. `jsHandler.js`: `process()` (executes the sandboxed JS code via `isolated-vm`)
13. `orchestrator.js`: `handleTaskResult()` (saves the JS output, ends the workflow)
### Scenario 4: Cron Job Triggering a Workflow

When the `node-cron` schedule fires.
Chronological Invocation:
1. The `node-cron` trigger fires.
2. `cronJob.service.js`: `runCronJob({ cronJob })`
3. `definitionProvider.util.js`: `extractWorkflowDefinitions(cronJob.tblWorkflows)` (gets the required workflow inputs)
4. `inputArgs.util.js`: `resolveInputs(runtimeValues: cronJob.workflowConfig.inputArgs)`
   - Applies defaults and guarantees the static cron payload is valid for the linked workflow.
   - If invalid, creates a `FAILED` history record immediately.
5. `workflow.service.js`: `executeWorkflow(workflowID, resolvedArgs)`
   - `executeWorkflow` safely re-verifies via `resolveInputs` (idempotent step).
6. `orchestrator.js`: `startWorkflow()` → starts regular workflow execution (matches Scenario 3).