We invest in the building blocks that make AI useful for business — durable, auditable, and built to grow with your data. This page covers the in-house technologies we develop ourselves, starting with the Amicus Knowledge Graph.
Most enterprise AI today is built on retrieval-augmented generation (RAG). The idea is simple: when you ask a question, the system finds text chunks that look semantically similar to your query, hands them to a language model, and the model writes an answer.
For simple lookups — "what's our return policy?" — this works. But the moment your question requires structure, the cracks show.
Retrieval ranks documents by how alike their embeddings are. Two passages can be nearest neighbors in vector space without being logically connected.
Questions like "which of our suppliers source from sanctioned regions?" require following relationships across multiple entities. Chunked text can't traverse that.
Every document is just text. There is no way to enforce types, validate facts, or trace where a claim originated. Hallucinations slip through unnoticed.
You can't ask the system what it doesn't cover. The boundaries of the knowledge base are invisible.
These aren't edge cases — they're the questions enterprises actually need to ask.
Knowledge graphs solve these problems by giving information a shape. Instead of treating the world as a pile of text, a knowledge graph represents it as entities (companies, people, products), relationships between them (owns, supplies, employs), and an ontology — the schema that defines what types of things and connections are allowed.
This structure unlocks everything retrieval can't:
Walk relationships across the graph to answer compound questions that span multiple entities and types.
Data conforms to a schema; queries return validated facts, not pattern-matched guesses.
Every entity and relationship can carry a source — you always know where a claim came from.
The ontology is a map of what the system can and cannot represent — no more invisible knowledge gaps.
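To make this concrete, here is a minimal sketch of the idea in Python. The entity names, relation types, and the sanctions question are illustrative, not drawn from any real deployment: an ontology constrains which facts are allowed, and a two-hop walk answers the kind of compound question that chunked retrieval can't.

```python
# Hypothetical mini-ontology: the allowed entity types, plus each
# relation type with the (source, destination) types it may connect.
ONTOLOGY = {
    "entity_types": {"Company", "Region"},
    "relation_types": {
        "SUPPLIES": ("Company", "Company"),
        "SOURCES_FROM": ("Company", "Region"),
    },
}

entities = {
    "acme": "Company", "globex": "Company",
    "initech": "Company", "region_x": "Region",
}
edges = [
    ("globex", "SUPPLIES", "acme"),
    ("initech", "SUPPLIES", "acme"),
    ("globex", "SOURCES_FROM", "region_x"),
]

def validate(src, rel, dst):
    """Reject facts whose types the ontology does not allow."""
    s_type, d_type = ONTOLOGY["relation_types"][rel]
    return entities[src] == s_type and entities[dst] == d_type

assert all(validate(*e) for e in edges)

# Two-hop question: which suppliers of 'acme' source from sanctioned regions?
sanctioned = {"region_x"}
suppliers = {s for s, r, d in edges if r == "SUPPLIES" and d == "acme"}
flagged = {s for s, r, d in edges
           if r == "SOURCES_FROM" and s in suppliers and d in sanctioned}
print(flagged)  # {'globex'}
```

The point of the sketch is the separation of concerns: the schema enforces types before a fact enters the graph, and the query walks typed relationships rather than matching similar-looking text.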
But traditional knowledge graphs come with their own pain. They demand an upfront schema. New concepts that weren't anticipated either get force-fit into wrong categories or quietly discarded. Maintaining them is slow, manual, and expensive.
For most teams, the trade-off has historically looked like this: structure or scale, pick one.
A knowledge graph that evolves with your data.
Amicus Knowledge Graph is built around a different premise: the schema itself should grow.
Rather than treating the ontology as a fixed artifact designed once and never touched, we treat it as a living structure that adapts as new data arrives. When the graph encounters a concept it can't represent — a new kind of organization, a new relationship type, an attribute it has never seen — the system doesn't ignore it. It generates a structured proposal: here is what I think we should add, why, and what data would populate it.
Humans review those proposals. Approved changes are applied to the schema atomically. Pending data waiting for the change is reprocessed automatically. The graph grows, but it never grows out of control.
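The apply-then-reprocess step can be sketched as follows. This is an assumption-laden illustration, not the product's implementation; the function and field names are invented.

```python
import copy

def apply_proposal(schema, pending_docs, proposal):
    """Apply an approved schema change atomically: either the new
    relation type lands and the pending data that was waiting for it
    is reprocessed, or nothing changes at all."""
    draft = copy.deepcopy(schema)  # work on a copy, commit at the end
    draft["relation_types"].add(proposal["new_relation"])
    reprocessed = [d for d in pending_docs
                   if d["needs"] == proposal["new_relation"]]
    # Commit point: swap in the draft only after every step succeeded.
    schema.clear()
    schema.update(draft)
    return reprocessed

schema = {"relation_types": {"OWNS", "SUPPLIES"}}
pending = [{"id": 1, "needs": "PARTNERS_WITH"}, {"id": 2, "needs": "OWNS"}]
done = apply_proposal(schema, pending, {"new_relation": "PARTNERS_WITH"})
print([d["id"] for d in done])  # [1]
```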
This approach — proposal generation by AI, decision-making by humans — is the foundation. We call it ontology in the loop.
AI scans data and detects gaps in the schema. For each gap, it generates a structured proposal — what to add, why, and which data would benefit. Humans approve, reject, or modify. The schema only changes when a person says yes.
Production ontologies grow large — too large to fit into a single AI prompt. We organize the schema into semantic domains (Ownership, Supply Chain, Partnership, etc.) and route each task to only the slice that's relevant. The AI sees less, reasons better, and costs drop with it.
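The routing idea reduces to selecting the overlapping slice of the schema. A minimal sketch, with invented domain and relation names:

```python
# Hypothetical domain map: each semantic domain owns a set of relation types.
DOMAINS = {
    "Ownership": {"OWNS", "SUBSIDIARY_OF"},
    "Supply Chain": {"SUPPLIES", "SOURCES_FROM"},
    "Partnership": {"PARTNERS_WITH"},
}

def schema_slice(task_relations):
    """Return only the domains whose relations overlap the task,
    so the prompt carries a fraction of the full ontology."""
    return {name: rels for name, rels in DOMAINS.items()
            if rels & task_relations}

task = {"SUPPLIES"}
relevant = schema_slice(task)  # only the Supply Chain slice survives
```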
A proposal isn't a sentence; it's a package. Schema changes, data to enrich the graph once approved, the source document, a confidence score, and a reasoning trail — all in one record. Reviewers see everything they need to decide.
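One way to picture such a record is a single typed object. The field names below are illustrative, not the product's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """One reviewable record: the change plus everything needed to judge it."""
    schema_change: str       # e.g. "add relation type PARTNERS_WITH"
    enrichment: list         # facts to insert once approved
    source_document: str     # where the evidence came from
    confidence: float        # model's self-reported confidence
    reasoning: str           # why the model thinks the gap is real
    status: str = "pending"  # pending -> approved / rejected / modified

p = Proposal(
    schema_change="add relation type PARTNERS_WITH",
    enrichment=[("acme", "PARTNERS_WITH", "globex")],
    source_document="press-release-2024-03.pdf",
    confidence=0.87,
    reasoning="Two companies repeatedly described as joint-venture partners.",
)
```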
Before a proposal reaches a human, the system checks whether something semantically similar already exists. COOPERATES_WITH and PARTNERS_WITH shouldn't be two separate types. Vector similarity catches the duplication before it pollutes the schema.
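The duplicate check boils down to comparing embeddings. A sketch with toy three-dimensional vectors — real ones would come from an embedding model, and the threshold here is purely illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings of existing relation types.
existing = {
    "PARTNERS_WITH": [0.9, 0.1, 0.0],
    "OWNS": [0.0, 0.1, 0.9],
}
proposed_name, proposed_vec = "COOPERATES_WITH", [0.88, 0.15, 0.02]

THRESHOLD = 0.95  # illustrative cutoff for "likely the same type"
duplicates = [t for t, v in existing.items()
              if cosine(proposed_vec, v) >= THRESHOLD]
# duplicates -> ['PARTNERS_WITH']: flag for the reviewer instead of
# silently adding a near-synonym to the schema
```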
Production data is separated from pending proposals. Approved facts carry their source. Every schema change has a record of who proposed it, who approved it, and what data justified it. The graph is always inspectable, always reversible.
Amicus Knowledge Graph is the product of our published research — Ontology in the Loop: A Framework for AI-Assisted Knowledge Graph Evolution. The paper covers the full architecture, gap detection, proposal lifecycle, and human-in-the-loop workflow in technical detail.