Retrieval that knows what changed five seconds ago.
Vector-DB RAG is stale, flat, and context-blind. TwinGraph replaces it with a live, persistent graph that AI agents query over gRPC or MCP — so retrieval reflects the current state of the world, not last night's batch.
Traditional RAG flattens your knowledge into embeddings and reindexes on a schedule. It can't represent relationships, can't react to live telemetry, and can't distinguish 'yesterday' from 'five seconds ago'. For operational AI, that gap is a hallucination waiting to happen.
TwinGraph is a live in-memory graph where nodes are documents, entities, sensors, or events — and edges carry semantics. MQTT and gRPC ingest mutate the graph in real time; state auto-persists. Agents query it directly over gRPC, or through the companion twingraph_mcp_server for MCP-native clients — so every answer is grounded in current state.
A few steps. Real infrastructure.
Model your domain
Declare entities and relationships. Nodes can be anything — docs, users, sensors, tickets.
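To make the shape of a domain model concrete, here is a minimal stdlib sketch. The `Node`/`Edge` dataclasses and the field names are illustrative stand-ins, not the TwinGraph API; they mirror the node-with-type-and-payload, typed-edge structure described above.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    node_type: str                      # "Doc", "User", "Sensor", "Ticket", ...
    payload: dict = field(default_factory=dict)

@dataclass
class Edge:
    source_id: str
    target_id: str
    connection_type: str                # edges carry semantics, e.g. "MONITORS"

# a tiny domain: a service and the sensor that monitors it
checkout = Node("service_checkout", "Service", {"tier": "critical"})
probe = Node("sensor_latency_1", "Sensor", {"unit": "ms"})
monitors = Edge(probe.node_id, checkout.node_id, "MONITORS")

print(monitors.source_id, monitors.connection_type, monitors.target_id)
```

The point of the exercise: relationships are first-class, so "which sensors monitor checkout?" is a lookup, not a similarity search.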
Stream live events
MQTT, webhooks, or direct gRPC calls mutate the graph as events arrive.
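A sketch of what ingest-as-mutation means, using only the standard library. The topic name, event payload shape, and the dict-based graph are assumptions for illustration; in TwinGraph the equivalent mutation happens server-side as MQTT or gRPC events arrive.

```python
import json

graph = {"nodes": {}, "edges": []}      # stand-in for the live graph state

def on_event(topic: str, payload_bytes: bytes) -> None:
    """Upsert a node (and its typed edge) from one incoming event message."""
    event = json.loads(payload_bytes)
    graph["nodes"][event["node_id"]] = {
        "node_type": event["node_type"],
        "payload": event["payload"],
    }
    if "source_id" in event:
        graph["edges"].append(
            (event["source_id"], event["connection_type"], event["node_id"])
        )

# simulate one MQTT-style message arriving on a (hypothetical) ops topic
on_event("ops/tickets", json.dumps({
    "node_id": "ticket_9821",
    "node_type": "Ticket",
    "payload": {"severity": "P1"},
    "source_id": "service_checkout",
    "connection_type": "AFFECTS",
}).encode())

print(len(graph["nodes"]), "node(s) after ingest")
```

Each event lands as a graph mutation the instant it arrives, so the next retrieval already sees it; there is no reindex step.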
Connect your agents
Query the gRPC API directly, or run the companion twingraph_mcp_server to give MCP-native clients typed access to the graph.
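For MCP-native clients, registration typically happens in the client's MCP server config. The entry below is a hypothetical sketch in the standard `mcpServers` shape; the command name comes from this page, but the argument is an assumption about how the companion server is pointed at a TwinGraph instance.

```json
{
  "mcpServers": {
    "twingraph": {
      "command": "twingraph_mcp_server",
      "args": ["--server-address", "twingraph.prod:50051"]
    }
  }
}
```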
Agents reason on it
LLMs retrieve subgraphs and traverse relationships instead of ranking flat chunks.
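What "retrieving a subgraph" looks like, as a self-contained sketch rather than the TwinGraph API: a bounded breadth-first walk over typed edges that returns (source, relation, target) triples to ground an LLM prompt. The adjacency data and function name are illustrative.

```python
from collections import deque

# adjacency with typed edges: node -> [(connection_type, neighbor), ...]
edges = {
    "service_checkout": [("AFFECTS", "ticket_9821"),
                         ("DEPENDS_ON", "service_payments")],
    "service_payments": [("AFFECTS", "ticket_9774")],
}

def retrieve_subgraph(start: str, depth: int = 2) -> list[tuple[str, str, str]]:
    """Collect the typed edges reachable from `start` within `depth` hops."""
    seen, triples = {start}, []
    frontier = deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue
        for rel, neighbor in edges.get(node, []):
            triples.append((node, rel, neighbor))
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, d + 1))
    return triples

# the triples become grounded, relationship-aware context for the prompt
for s, r, t in retrieve_subgraph("service_checkout"):
    print(s, r, t)
```

A flat top-k ranking would surface the checkout ticket but miss that checkout depends on payments, which has its own open ticket; the traversal carries that chain into the context window.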
Real live graph RAG.
import os

import lucidtc_twingraph as tg

graph = tg.RemoteTwinGraph(
    twingraph_id="prod-ops",
    server_address="twingraph.prod:50051",
    secure_channel=True,
    auth_token=os.environ["TWINGRAPH_SERVER_AUTH_TOKEN"],
)

# ingest a live event as a graph node, linked to the affected service
graph.add_twingraph_node(
    twingraph_node=tg.TwinGraphNode(
        node_id="ticket_9821",
        node_type="Ticket",
        payload={"severity": "P1", "service": "checkout", "status": "open"},
    ),
    connection_type="AFFECTS",
    source_twingraph_id="prod-ops-node-service_checkout_1",
)

# an agent queries the live graph over gRPC; MCP-native clients get the
# same view through the companion twingraph_mcp_server
nodes = graph.find_nodes_by_type("Ticket")
print(len(nodes), "open tickets grounding the next answer")

- 01
Graph-native retrieval captures relationships that flat embeddings lose.
- 02
Live updates mean answers reflect current state, not last night's index.
- 03
Talk to the graph over gRPC directly, or via MCP through the companion server — no vector store to babysit.