IT. OT. AI. Your infrastructure is a living graph. Run it like one.
TwinGraph is the live graph server built to do exactly that.
Your systems, your AI agents, and your data all depend on each other, but nothing in your stack actually knows it. TwinGraph does, in real time.
You don't have an orchestration problem. You have a shape problem.
Most teams respond by adding another pipeline, another sync job, another dashboard. But the problem isn't missing infrastructure; it's that none of it holds a unified, live picture of what's actually happening. TwinGraph is built on a different premise: give your operations a runtime that mirrors their real structure, and "what's true right now" finally has a single answer.
A server that hosts graphs. Graphs whose nodes are running code.
TwinGraph Server is a persistent gRPC service that hosts one or more TwinGraphs: in-memory, Rust-powered graphs identified by ID. Each TwinGraph is composed of TwinGraphNodes, and those nodes are more than stored data. A node can hold a living object that can connect to data streams, call an AI agent, query data stores, or invoke a serverless function in response to graph activity, turning the graph itself into an active runtime for connected systems and AI workflows.
TwinGraph: identified by a twingraph_id, it holds the nodes and their relationships for a single system, plant, cloud, or tenant.
TwinGraphNode: every node is a TwinGraphNode subclass. It stores data, runs code, and reaches out to the systems it represents: data streams, data stores, serverless functions, AI agents, APIs, and more.
Integrations: the protocols and services your stack already speaks.
Define nodes. Add them to the graph. They start doing work.
The lucidtc_twingraph SDK is how you talk to the server. Connect with RemoteTwinGraph, add typed nodes, and the moment they're on the graph they can stream telemetry, run agents, and call out to the systems they represent.
import lucidtc_twingraph as tg

# connect to a graph hosted on a TwinGraph Server
# (TWINGRAPH_SERVER_AUTH_TOKEN and MQTT_BROKER_CREDS are placeholders
# for credentials loaded from your environment or secret store)
graph = tg.RemoteTwinGraph(
    twingraph_id="plant-kc",
    server_address="twingraph.prod:50051",
    secure_channel=True,
    auth_token=TWINGRAPH_SERVER_AUTH_TOKEN,
)

# drop a live MQTT broker onto the graph
broker = tg.MQTTBrokerNode(
    broker_host="mqtt.plant.local",
    broker_port=8883,
    use_tls=True,
    creds=MQTT_BROKER_CREDS,
)
graph.add_mqtt_broker_node(broker, auto_connect=True)

# subscribe to a topic; telemetry now flows into child nodes
graph.subscribe_to_topic(
    broker_twingraph_id=broker.twingraph_id,
    topic="plant/kc/line01/machine02",
    qos=2,
)

# add a Vertex AI agent that reasons over the live graph
# (the tools the agent has access to are added to the graph as nodes, too)
agent = tg.VertexAIAgentNode(
    node_id="sre-agent-01",
    project_id="my-gcp-project",
    location="us-central1",
    resource_id="2038243770011288214",
)
graph.add_twingraph_node(twingraph_node=agent)
See every node, every edge, every event — live.
TwinGraph Browser is a standalone management interface for your running TwinGraph Server. Connect to any hosted or local TwinGraph Server to build and modify graphs, manage MQTT brokers and topics, inspect node state, and watch live telemetry stream through the topology. It includes an interactive graph visualization and a built-in Python console for direct server access.

Built for real operational problems.
Live Graph RAG
Retrieval that knows what changed five seconds ago. A vector DB answers from the last batch job; a live graph answers from the current state of your systems because the nodes are the systems.
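To make the contrast concrete, here is a minimal sketch in plain Python (not the TwinGraph SDK; all names and values are illustrative). A batch snapshot keeps answering from the last index run, while a live graph node is mutated by events and answers from current state:

```python
import time

# A toy "live graph": each node carries its current state plus a timestamp,
# so retrieval answers from what is true right now.
graph = {
    "machine02": {"state": "running", "temp_c": 61.0, "updated": time.time()},
    "line01": {"state": "running", "updated": time.time()},
}

# A toy "batch snapshot": what a vector index captured at the last ETL run.
snapshot = {"machine02": {"state": "running", "temp_c": 48.0}}

# A live event arrives and mutates the node in place
graph["machine02"].update(state="faulted", temp_c=92.5, updated=time.time())

def retrieve(source, node_id):
    """Answer a retrieval query from the given source of truth."""
    return source[node_id]["state"]

print(retrieve(snapshot, "machine02"))  # stale answer: "running"
print(retrieve(graph, "machine02"))     # live answer:  "faulted"
```

The point is the data path, not the code: retrieval over a live graph reads the same objects the events write to, so there is no index lag to reason around.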
Cloud Observability
Your cloud, modeled as a living graph. Dashboards show metrics; the graph shows which service depends on which queue, which Lambda, which warehouse — and lets an agent walk those edges to find root cause.
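A hedged sketch of what "walking the edges to find root cause" can look like, again in plain Python with invented service names rather than the TwinGraph SDK: breadth-first search over dependency edges, stopping at unhealthy nodes whose own dependencies are all healthy.

```python
from collections import deque

# Illustrative dependency edges: service -> what it depends on.
deps = {
    "checkout-api": ["orders-queue", "pricing-svc"],
    "pricing-svc": ["rates-lambda"],
    "orders-queue": ["orders-warehouse"],
    "rates-lambda": [],
    "orders-warehouse": [],
}

# Current health, as a live graph would hold it.
healthy = {
    "checkout-api": False,
    "orders-queue": True,
    "pricing-svc": False,
    "rates-lambda": False,
    "orders-warehouse": True,
}

def root_causes(service):
    """Walk dependency edges breadth-first and return unhealthy nodes
    none of whose own dependencies are unhealthy: the likely root causes."""
    causes, seen, queue = [], {service}, deque([service])
    while queue:
        node = queue.popleft()
        bad_deps = [d for d in deps[node] if not healthy[d]]
        if not healthy[node] and not bad_deps:
            causes.append(node)
        for d in bad_deps:
            if d not in seen:
                seen.add(d)
                queue.append(d)
    return causes

print(root_causes("checkout-api"))  # ['rates-lambda']
```

An agent with graph access can run exactly this kind of traversal, but over live nodes instead of a hand-written dict.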
IoT & Telemetry
A digital twin that updates in real time and remembers. Plants, lines, and machines are nodes; telemetry flows onto them; history is a property of the thing, not a separate time-series silo.
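"History is a property of the thing" can be sketched in a few lines of plain Python (illustrative only; the real SDK nodes differ): a node object ingests telemetry, updating its live state and appending to its own history in one step.

```python
from dataclasses import dataclass, field

@dataclass
class MachineNode:
    """Toy digital-twin node: latest reading plus its own history."""
    node_id: str
    latest: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def ingest(self, reading: dict):
        # New telemetry updates the live state and appends to the node's
        # own history; there is no separate time-series silo to join against.
        self.latest = reading
        self.history.append(reading)

machine = MachineNode("plant/kc/line01/machine02")
machine.ingest({"t": 0, "temp_c": 60.2})
machine.ingest({"t": 1, "temp_c": 61.0})

print(machine.latest["temp_c"])   # 61.0
print(len(machine.history))       # 2
```

Because state and history live on the same node, "what is this machine doing now?" and "what has it done?" are answered by the same object.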
Gemini Enterprise
AI connected to reality, not a siloed database, API, or knowledge base. Use Gemini Enterprise to securely distribute real-world intelligence across your organization.