lucid tc | twingraph
Solutions · Live Graph RAG

Retrieval that knows what changed five seconds ago.

Vector-DB RAG is stale, flat, and context-blind. TwinGraph replaces it with a live, persistent graph that AI agents query over gRPC or MCP — so retrieval reflects the current state of the world, not last night's batch.

Topology
Data Sources → MQTT → TwinGraph Server → MCP → LLM Agent
The problem

Traditional RAG flattens your knowledge into embeddings and reindexes on a schedule. It can't represent relationships, can't react to live telemetry, and can't distinguish 'yesterday' from 'five seconds ago'. For operational AI, that gap is a hallucination waiting to happen.

The TwinGraph angle

TwinGraph is a live in-memory graph where nodes are documents, entities, sensors, or events — and edges carry semantics. MQTT and gRPC ingest mutate the graph in real time; state auto-persists. Agents query it directly over gRPC, or through the companion twingraph_mcp_server for MCP-native clients — so every answer is grounded in current state.

How it works

A few steps. Real infrastructure.

01

Model your domain

Declare entities and relationships. Nodes can be anything — docs, users, sensors, tickets.
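To make the modeling step concrete, here is a minimal in-memory sketch of the node/edge shape described above. This is illustration only, not the SDK's API: the `Node` and `Edge` classes and the example IDs are hypothetical stand-ins.

```python
# Hypothetical stand-in for TwinGraph's domain model, for illustration only.
# Nodes are typed entities; edges carry semantic labels.
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    node_type: str                      # e.g. "Ticket", "Sensor", "Service"
    payload: dict = field(default_factory=dict)

@dataclass
class Edge:
    source_id: str
    target_id: str
    connection_type: str                # edges carry semantics, e.g. "AFFECTS"

# A small domain: a service, a sensor monitoring it, a ticket affecting it.
nodes = [
    Node("service_checkout_1", "Service", {"tier": "critical"}),
    Node("sensor_latency_7", "Sensor", {"unit": "ms"}),
    Node("ticket_9821", "Ticket", {"severity": "P1", "status": "open"}),
]
edges = [
    Edge("sensor_latency_7", "service_checkout_1", "MONITORS"),
    Edge("ticket_9821", "service_checkout_1", "AFFECTS"),
]
```

Because edges are labeled, a query can ask "what AFFECTS this service?" rather than "what text is similar to this text?".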

02

Stream live events

MQTT, webhooks, or direct gRPC calls mutate the graph as events arrive.
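The ingest path can be sketched as follows, using a plain dict as a stand-in for the live graph; in production the TwinGraph server holds this state and persists it automatically, and the handler name and topic scheme here are assumptions.

```python
import json

# Stand-in for the live graph: node_id -> node record.
graph: dict[str, dict] = {}

def on_mqtt_message(topic: str, payload: bytes) -> None:
    """Hypothetical handler: each incoming event upserts a node keyed by topic."""
    event = json.loads(payload)
    node_id = topic.replace("/", "_")
    graph[node_id] = {"node_type": "Event", "payload": event}

# Two readings arrive; the second mutates the node in place, so the graph
# always reflects the latest state rather than a batch snapshot.
on_mqtt_message("sensors/checkout/latency", b'{"ms": 180}')
on_mqtt_message("sensors/checkout/latency", b'{"ms": 420}')
```

The same upsert semantics apply whether the event arrives over MQTT, a webhook, or a direct gRPC call.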

03

Connect your agents

Query the gRPC API directly, or run the companion twingraph_mcp_server to give MCP-native clients typed access to the graph.

04

Agents reason on it

LLMs retrieve subgraphs and traverse relationships instead of ranking flat chunks.
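The difference from chunk ranking is visible in a few lines: retrieval is a walk over labeled edges, not a similarity sort. A minimal sketch, using a toy adjacency list as a stand-in for the subgraph an agent would fetch over gRPC or MCP (the node IDs and edge labels are assumptions):

```python
from collections import deque

# Toy adjacency list with semantic edge labels.
edges = {
    "ticket_9821": [("AFFECTS", "service_checkout_1")],
    "service_checkout_1": [("DEPENDS_ON", "service_payments_1")],
    "service_payments_1": [],
}

def traverse(start: str, max_hops: int = 2) -> list[str]:
    """Breadth-first walk from a node: relationships, not similarity ranking."""
    seen, order = {start}, [start]
    queue = deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:
            continue
        for _label, nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                order.append(nxt)
                queue.append((nxt, depth + 1))
    return order

# From a P1 ticket, two hops reach the affected service and its dependency.
print(traverse("ticket_9821"))
```

The hop limit bounds the subgraph handed to the LLM, so context stays small and every retrieved node is connected to the question by an explicit relationship.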

Capabilities used
gRPC API · MCP-compatible · Vertex AI Agents · MQTT · Neo4j + BigQuery export
Example code

Real live graph RAG.

python · live-graph-rag.py · live
import os

import lucidtc_twingraph as tg

graph = tg.RemoteTwinGraph(
    twingraph_id="prod-ops",
    server_address="twingraph.prod:50051",
    secure_channel=True,
    auth_token=os.environ["TWINGRAPH_SERVER_AUTH_TOKEN"],
)

# ingest a live event as a graph node
graph.add_twingraph_node(
    twingraph_node=tg.TwinGraphNode(
        node_id="ticket_9821",
        node_type="Ticket",
        payload={"severity": "P1", "service": "checkout", "status": "open"},
    ),
    connection_type="AFFECTS",
    source_twingraph_id="prod-ops-node-service_checkout_1",
)

# a Vertex AI agent queries the live graph over gRPC
# (MCP-native clients reach the same state via twingraph_mcp_server)
tickets = graph.find_nodes_by_type("Ticket")
print(len(tickets), "live tickets grounding the next answer")
Why TwinGraph
  • 01

    Graph-native retrieval captures relationships that flat embeddings lose.

  • 02

    Live updates mean answers reflect current state, not last night's index.

  • 03

    Talk to the graph over gRPC directly, or via MCP through the companion server — no vector store to babysit.

See this running on your infrastructure.