Twin

Your dedicated database, built into the platform

Provision a Postgres database per customer, define schemas, sync data from external systems, and expose it all through a REST gateway — no infrastructure to manage.

Twin database dashboard
How it works

Four steps to a live data layer

From empty database to queryable API in minutes.

Provision

Spin up a dedicated Postgres instance per customer with one click.

Create Tables

Define schemas for loads, orders, carriers, or any domain entity.

Connect Data

Poll external systems on a schedule or dump data from workflow runs.

Query & Expose

Run SQL directly or expose tables through an auto-generated REST API.

Schema Design

Define your schema or let the platform build it

Create typed tables with primary keys, defaults, and constraints — or push data and let the schema be inferred automatically.

Typed Columns

Text, integer, boolean, timestamp, JSON, UUID, and float — every column is strictly typed.

Row Operations

Insert, update, upsert, and delete rows via the UI, SQL, or the REST API.
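
The upsert behavior can be sketched in plain Python: insert when the primary key is new, merge when it already exists. This models the semantics only; the function and table names here are illustrative, not the platform's actual API.

```python
# Model a table as a dict keyed by primary key to illustrate upsert semantics.

def upsert(table: dict, pk: str, row: dict) -> str:
    """Upsert `row` (which must contain the `pk` column) into `table`.

    Returns "inserted" or "updated" so callers can audit what happened.
    """
    key = row[pk]
    if key in table:
        table[key].update(row)   # existing primary key: merge changed columns
        return "updated"
    table[key] = dict(row)       # new primary key: plain insert
    return "inserted"

loads = {}
upsert(loads, "id", {"id": 4821, "origin": "Dallas, TX", "status": "pending"})
upsert(loads, "id", {"id": 4821, "status": "in_transit"})  # same id: updates in place
```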

SQL Views

Create read-only views that join, filter, or aggregate across tables.

Auto-Table Creation

Push data from a workflow and the platform creates the table and schema automatically.
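
Schema inference of this kind can be sketched as a mapping from each value in a pushed row to one of the column types listed above (text, integer, boolean, timestamp, json, uuid, float). This is an illustrative sketch; the platform's real inference rules may differ.

```python
import json
import uuid
from datetime import datetime

def infer_column_type(value) -> str:
    """Map a Python value to a column type name."""
    if isinstance(value, bool):        # check bool before int: bool subclasses int
        return "boolean"
    if isinstance(value, int):
        return "integer"
    if isinstance(value, float):
        return "float"
    if isinstance(value, datetime):
        return "timestamp"
    if isinstance(value, (dict, list)):
        return "json"
    if isinstance(value, uuid.UUID):
        return "uuid"
    return "text"

def infer_schema(row: dict) -> dict:
    """Derive a column -> type schema from one pushed row."""
    return {col: infer_column_type(val) for col, val in row.items()}

schema = infer_schema({
    "id": uuid.uuid4(),
    "origin": "Dallas, TX",
    "weight_kg": 812.5,
    "delivered": False,
    "metadata": {"carrier": "ACME"},
})
```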

Create Table
twin.loads
Column       Type        Default
id (PK)      uuid        —
origin       text        —
destination  text        —
status       text        'pending'
weight_kg    float       —
created_at   timestamp   now()
metadata     json        —
Connect Data

Keep your database in sync — automatically

Pull data from external systems on a schedule or push it from workflow runs. Every row is tracked and auditable.

Scheduled Sync

Poll TMS, ERP, or any API on a cron schedule with cursor-based pagination.

Workflow Dumps

Log call transcripts, order updates, and audit trails directly from workflow runs.

Audit Trails

Every write is timestamped and traceable — full history of what changed and when.

# data-polling.yml
source: tms_api/loads
schedule: every 5m
cursor: updated_at
target: twin.loads
id     origin        status
4821   Dallas, TX    in_transit
4822   Chicago, IL   delivered
4823   Miami, FL     pending
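
The cursor-based sync configured in data-polling.yml above can be sketched as a loop: fetch pages newer than the cursor, advance the cursor past each page, and resume from the saved cursor on the next scheduled run. `fetch_page` is a stand-in for the external TMS API, not a real client.

```python
def fetch_page(rows, cursor, limit=2):
    """Hypothetical API call: return up to `limit` rows with updated_at > cursor."""
    page = sorted((r for r in rows if r["updated_at"] > cursor),
                  key=lambda r: r["updated_at"])
    return page[:limit]

def sync(source_rows, cursor=""):
    """One scheduled run: drain every page newer than `cursor`."""
    synced = []
    while True:
        page = fetch_page(source_rows, cursor)
        if not page:
            break
        synced.extend(page)
        cursor = page[-1]["updated_at"]   # advance the cursor past this page
    return synced, cursor

source = [
    {"id": 4821, "updated_at": "2024-05-01T10:00:00Z"},
    {"id": 4822, "updated_at": "2024-05-01T10:05:00Z"},
    {"id": 4823, "updated_at": "2024-05-01T10:10:00Z"},
]
rows, last_cursor = sync(source)
```

Because ISO 8601 timestamps sort lexicographically, plain string comparison is enough for the cursor; re-running `sync` from the saved cursor fetches nothing new.
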
Monitoring

Know exactly what your data is doing

Track every sync job, inspect failures, and backfill on demand.

Status Tracking

Every sync job moves through a clear lifecycle — pending, running, completed, or failed — so you always know where things stand.
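
The lifecycle above can be sketched as an explicit state machine: pending → running → completed | failed, with the last two as terminal states. The transition table here is an assumption for illustration.

```python
# Allowed transitions for a sync job; completed and failed are terminal.
TRANSITIONS = {
    "pending": {"running"},
    "running": {"completed", "failed"},
    "completed": set(),
    "failed": set(),
}

def advance(status: str, new_status: str) -> str:
    """Move a job to `new_status`, rejecting illegal jumps (e.g. pending -> completed)."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status

state = advance(advance("pending", "running"), "completed")
```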

Manual Backfill

Re-run any sync from a specific cursor or timestamp to patch gaps without duplicating data.
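
Why a backfill doesn't duplicate data: re-applied rows are keyed by primary key, so replaying a range overwrites rather than inserts twice. A minimal sketch, with illustrative names rather than the platform's API:

```python
def backfill(target: dict, source_rows, from_cursor: str) -> int:
    """Re-apply every source row with updated_at >= from_cursor; idempotent by id."""
    applied = 0
    for row in source_rows:
        if row["updated_at"] >= from_cursor:
            target[row["id"]] = row   # same id overwrites: no duplicates on replay
            applied += 1
    return applied

target = {4821: {"id": 4821, "updated_at": "2024-05-01T10:00:00Z"}}
source = [
    {"id": 4821, "updated_at": "2024-05-01T10:00:00Z", "status": "in_transit"},
    {"id": 4822, "updated_at": "2024-05-01T10:05:00Z", "status": "delivered"},
]
applied = backfill(target, source, "2024-05-01T10:00:00Z")
```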

Sync Metrics

Track rows synced, job duration, error rates, and last-sync timestamps across all connected sources.
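
The metrics listed above reduce to simple aggregations over per-job records. The record fields here are assumptions for illustration:

```python
jobs = [
    {"rows": 120, "duration_s": 4.2, "status": "completed", "finished_at": "2024-05-01T10:05:00Z"},
    {"rows": 0,   "duration_s": 1.1, "status": "failed",    "finished_at": "2024-05-01T10:10:00Z"},
    {"rows": 87,  "duration_s": 3.6, "status": "completed", "finished_at": "2024-05-01T10:15:00Z"},
]

metrics = {
    "rows_synced": sum(j["rows"] for j in jobs),
    "avg_duration_s": round(sum(j["duration_s"] for j in jobs) / len(jobs), 2),
    "error_rate": sum(j["status"] == "failed" for j in jobs) / len(jobs),
    "last_sync": max(j["finished_at"] for j in jobs),   # ISO strings sort correctly
}
```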

Query & Expose

SQL when you need it, REST when you don't

Write queries in the built-in editor or let PostgREST generate endpoints for every table automatically.

  • SQL Editor — write and test queries directly in the platform
  • PostgREST Gateway — every table gets an auto-generated REST endpoint
  • Row-level security — scope access per customer or API key
  • Joins & views — combine tables and expose computed datasets
-- SQL Editor
SELECT id, origin, status
FROM loads
WHERE status = 'in_transit'
ORDER BY updated_at DESC;
# PostgREST Gateway
GET /rest/v1/loads?status=eq.in_transit
→ 200 OK  ·  42 rows  ·  12ms
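
The GET request above follows PostgREST's filter convention: each filter is `column=operator.value`. A sketch of building such a URL from a filter dict (the `/rest/v1/` gateway path is taken from the example above; the helper itself is illustrative):

```python
from urllib.parse import urlencode

def build_query(table: str, filters: dict, order: str = "") -> str:
    """Build a PostgREST-style query string: each filter becomes column=op.value."""
    params = {col: f"{op}.{val}" for col, (op, val) in filters.items()}
    if order:
        params["order"] = order
    return f"/rest/v1/{table}?{urlencode(params)}"

url = build_query("loads", {"status": ("eq", "in_transit")}, order="updated_at.desc")
```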
Infrastructure

Managed Postgres, zero ops

Every customer gets a dedicated, encrypted Postgres instance with serverless access — no infrastructure to manage.

  • Dedicated RDS — each customer gets an isolated Postgres instance on AWS RDS
  • Encrypted credentials — connection strings are encrypted at rest and in transit
  • Status lifecycle — instances move through provisioning, active, paused, and terminated states
  • Fargate gateway — PostgREST runs on serverless containers with auto-scaling
  • JWT auth — every API request is authenticated and scoped with signed tokens
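
The JWT model above can be sketched with a standard HS256 token whose payload scopes the caller to a single org. Key, claim names, and the hand-rolled signing below are illustrative; the platform's actual signing scheme is not documented here.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Produce an HS256-signed token: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> dict:
    """Check the signature and return the claims, or raise ValueError."""
    header, body, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature")
    padded = body + "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

secret = b"demo-secret"
token = sign_jwt({"org": "acme-logistics", "role": "readonly"}, secret)
claims = verify_jwt(token, secret)
```

In practice a vetted JWT library would replace the hand-rolled helpers; the point is that every request carries signed, per-org claims the gateway can verify without a database lookup.
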
Instance Status
Active
org       acme-logistics
region    us-east-1
engine    postgres 15.4
tables    12
host      ••••••.us-east-1.rds.amazonaws.com
password  ••••••••••••

Start building your data layer today.