lucid tc | twingraph

IT. OT. AI. Your infrastructure is a living graph. Run it like one.

TwinGraph is the live graph server built to do exactly that.

Your systems, your AI agents, and your data all depend on each other, but nothing in your stack actually knows it. TwinGraph does, in real time.

Why a TwinGraph?

You don't have an orchestration problem. You have a shape problem.

Most teams respond by adding another pipeline, another sync job, another dashboard. But the problem isn't missing infrastructure, it's that none of it holds a unified, live picture of what's actually happening. TwinGraph is built on a different premise: give your operations a runtime that mirrors their real structure, and “what's true right now” finally has a single answer.

The problem
Your operational reality is already a graph.
Machines, agents, and queries all depend on each other, but your stack stitches them together with pipelines and dashboards that are always stale somewhere.
The shape
A graph whose nodes are live is the integration layer.
When relationships are first-class and nodes are running objects, the graph stops being a picture of your system and starts being the system.
The payoff
Every signal is traceable by default.
Observability isn't a layer you bolt on, it's the topology itself. Walk the graph to trace any value to its source and see what's affected downstream.
The adoption cost
If your team writes Python, they can run a TwinGraph.
A Python SDK your engineers already know, with optional publishing to your graph database. TwinGraph doesn't replace your stack, it gives it a living backbone.
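The "walk the graph" claim above is easy to picture with a toy example. The sketch below is not the TwinGraph SDK — it is a plain-Python illustration, with made-up node names, of how a downstream walk answers "what is affected by this change?" and an upstream walk (over inverted edges) answers "where did this value come from?":

```python
from collections import deque

# Toy dependency graph: an edge a -> b means "b consumes data from a".
# Node names are illustrative only.
edges = {
    "mqtt-broker": ["machine-02-topic"],
    "machine-02-topic": ["machine-02", "telemetry-table"],
    "machine-02": ["sre-agent"],
    "telemetry-table": ["sre-agent"],
}

def downstream(graph, start):
    """Breadth-first walk: every node affected by a change at `start`."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for child in graph.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

def upstream(graph, start):
    """Invert the edges, then walk: every source feeding `start`."""
    inverted = {}
    for src, dsts in graph.items():
        for dst in dsts:
            inverted.setdefault(dst, []).append(src)
    return downstream(inverted, start)

affected = downstream(edges, "machine-02-topic")  # blast radius of a change
sources = upstream(edges, "sre-agent")            # provenance of a value
```

The point is that once relationships are stored as first-class edges, both questions reduce to the same traversal in opposite directions.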
The Model

A server that hosts graphs. Graphs whose nodes are running code.

TwinGraph Server is a persistent gRPC service that hosts one or more TwinGraphs: in-memory, Rust-powered graphs identified by ID. Each TwinGraph is composed of TwinGraphNodes, and those nodes are more than stored data. A node can hold a living object that can connect to data streams, call an AI agent, query data stores, or invoke a serverless function in response to graph activity, turning the graph itself into an active runtime for connected systems and AI workflows.

01 — SERVER
TwinGraph Server
An infrastructure and platform agnostic gRPC + TLS service. Hosts many TwinGraphs concurrently, auto-saves their state, and exposes everything through authenticated gRPC, REST, and WebSocket interfaces.
02 — GRAPH
A TwinGraph
An in-memory, Rust-powered graph, identified by a twingraph_id. Holds the nodes and their relationships for a single system, plant, cloud, or tenant.
03 — NODE
A live programmatic object
Every node is a TwinGraphNode subclass. It stores data, runs code, and reaches out to the systems it represents — data streams, data stores, serverless functions, AI Agents, APIs, and more.
Diagram: a TwinGraph Server (gRPC · TLS · persistent · multi-graph) hosting two graphs. TWINGRAPH plant-kc: MQTTBrokerNode mqtt.plant.local:8883, MQTTTopicNodes line01/machine02 and line01/machine03, TwinGraphNode machine-02, VertexAIAgentNode sre-agent-01, BigQueryNode analytics.telemetry. TWINGRAPH cloud-prod: AWSLambdaNodes data-transform-fn and alert-handler, CloudRunFunctionNode ingest-service, VertexAIAgentNode ops-agent, GraphStoreNeo4jNode neo4j.prod:7687. Legend: Ingest · AI Agent · Action · Datastore · Physical Component.
Integrations

The protocols and services your stack already speaks.

MQTT · Neo4j · BigQuery · Pub/Sub · Vertex AI · Gemini · Google ADK · MCP · gRPC · AWS Lambda · Cloud Run · Terraform · Kafka · Redis · PostgreSQL · Snowflake · Azure Functions · OpenAI · LangChain · Dataflow · Cloud Storage · S3 · + many more
TWINGRAPH SDK

Define nodes. Add them to the graph. They start doing work.

The lucidtc_twingraph SDK is how you talk to the server. Connect with RemoteTwinGraph, add typed nodes, and the moment they're on the graph they can stream telemetry, run agents, and call out to the systems they represent.

python · plant_floor.py
import lucidtc_twingraph as tg

# connect to a graph hosted on a TwinGraph Server
graph = tg.RemoteTwinGraph(
    twingraph_id="plant-kc",
    server_address="twingraph.prod:50051",
    secure_channel=True,
    auth_token=TWINGRAPH_SERVER_AUTH_TOKEN,
)

# drop a live MQTT broker onto the graph
broker = tg.MQTTBrokerNode(
    broker_host="mqtt.plant.local",
    broker_port=8883,
    use_tls=True,
    creds=MQTT_BROKER_CREDS,
)
graph.add_mqtt_broker_node(broker, auto_connect=True)

# subscribe to a topic — telemetry now flows into child nodes
graph.subscribe_to_topic(
    broker_twingraph_id=broker.twingraph_id,
    topic="plant/kc/line01/machine02",
    qos=2,
)

# add a Vertex AI agent that reasons over the live graph
# (the tools the agent has access to are added to the graph as nodes too)
agent = tg.VertexAIAgentNode(
    node_id="sre-agent-01",
    project_id="my-gcp-project",
    location="us-central1",
    resource_id="2038243770011288214",
)
graph.add_twingraph_node(twingraph_node=agent)
TwinGraph Browser showing the plant-kc graph built by the code above: an MQTT broker node, MQTT topic children streaming plant telemetry, and an sre-agent-01 Vertex AI agent node with tool children.
TwinGraph Browser

See every node, every edge, every event — live.

TwinGraph Browser is a standalone management interface for your running TwinGraph Server. Connect to any hosted or local TwinGraph Server to build and modify graphs, manage MQTT brokers and topics, inspect node state, and watch live telemetry stream through the topology. Includes an interactive graph visualization and a built-in Python console for direct server access.

Full TwinGraph Browser interface showing a large live graph with interconnected nodes, telemetry streams, and agent activity.
Get started

Ready to see your infrastructure as a living graph?