GraphAware Empowers Intelligence Analysts With AI Assistant Called Maestro

April 24, 2025 · 4 min read

AI assistant Maestro lets intelligence analysts explore their data using natural language—without compromising client data security

GraphAware Hume has introduced a substantial new feature in its 2.26 release: an AI assistant called Maestro, which lays the groundwork for analysing knowledge graphs with LLMs. Using natural-language questions instead of queries, analysts can now quickly explore the knowledge graph to uncover new insights, generate code snippets, summarise important facts, or get in-depth contextual help while working with GraphAware Hume.

“Our vision is to empower analysts to efficiently do the work they excel at. With Maestro, they don’t need to spend time on routine tasks that the machine can do for them,” explains Christophe Willemsen, Chief Technology Officer at GraphAware. He adds that LLM-based systems introduce new security challenges, and the team takes every precaution to protect client data and mitigate hallucinations. Graphs and LLMs are a natural combination for retrieving accurate answers: the structured information in the graph provides a framework that keeps the LLM’s answers in line with the data.

Security is a priority

Due to the sensitive nature of the work of law enforcement and intelligence agencies, security has been the primary focus when designing Hume Maestro. “GraphAware Hume is designed with a tight focus on access control and security in mind, and Maestro conforms to that vision,” adds C. Willemsen. Even when using an LLM to query data in the knowledge graph, users cannot reach data they are not authorised to see.

At the same time, confidential organisational data will not leak through prompts sent to online LLMs – a significant worry for agencies working with sensitive information. “We are employing PII identification and anonymisation techniques to comply with the security requirements of our clients,” C. Willemsen explains.
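The anonymisation idea can be sketched in a few lines of Python. This is a generic illustration using simple regular expressions, not Maestro’s actual implementation (a production system would use trained NER models for PII detection): sensitive values are swapped for placeholder tokens before the prompt leaves the organisation, then mapped back into the LLM’s answer on return.

```python
import re

# Hypothetical patterns; a real system would use a trained NER model.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),
}

def anonymise(text: str):
    """Replace PII with placeholders; return redacted text and a mapping."""
    mapping = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            mapping[token] = match
            text = text.replace(match, token)
    return text, mapping

def deanonymise(text: str, mapping: dict) -> str:
    """Restore the original values in the LLM's answer."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

# Only the redacted prompt would be sent to an online LLM.
redacted, mapping = anonymise(
    "Contact John at john.doe@example.com or +420 777 123 456"
)
```

The mapping stays inside the organisation, so the online model never sees the real identifiers.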

Confidence in an LLM’s answers is crucial, so a lot of care has been taken to counter the tendency of LLMs to hallucinate. LLMs are notoriously eager to provide a positive answer even if it means fabricating one. This tendency is addressed at the prompt-engineering stage, so that Maestro does not return made-up answers.
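A common way to implement that kind of guardrail at the prompt-engineering stage is to constrain the model to the retrieved graph context and give it an explicit way out when the data is missing. The sketch below is a generic illustration, not Maestro’s actual prompt:

```python
def build_prompt(question: str, graph_context: str) -> str:
    """Wrap the user's question in instructions that constrain the LLM
    to the retrieved graph data, reducing fabricated answers."""
    return (
        "You are an assistant for intelligence analysts.\n"
        "Answer ONLY using the facts in the context below.\n"
        "If the context does not contain the answer, reply exactly: "
        "'I cannot answer this from the available data.'\n\n"
        f"Context:\n{graph_context}\n\n"
        f"Question: {question}\n"
    )

prompt = build_prompt(
    "Where was the vehicle last seen?",
    "(:Vehicle {plate: 'ABC-123'})-[:SEEN_AT]->(:Location {name: 'Main St'})",
)
```

Giving the model a prescribed refusal phrase is a simple design choice that makes “I don’t know” a valid, checkable answer instead of an invitation to guess.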

AI Assistant for Hume

Maestro adds the ability to interact directly with the LLMs of the client’s choice. Both online (OpenAI and Azure) and on-premise (Ollama) models are already supported. Other LLMs are easy to add, including internal models trained and hosted by the client.
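That kind of extensibility is typically achieved with a small provider interface. The sketch below is purely illustrative (Maestro’s internal API is not public); the on-premise backend shown uses Ollama’s documented `/api/generate` HTTP endpoint:

```python
import json
import urllib.request
from typing import Protocol


class LLMProvider(Protocol):
    """Illustrative plug-in interface; any backend with complete() fits."""
    def complete(self, prompt: str) -> str: ...


class OllamaProvider:
    """Sketch of an on-premise backend using Ollama's HTTP API."""

    def __init__(self, host: str = "http://localhost:11434", model: str = "llama3"):
        self.host, self.model = host, model

    def complete(self, prompt: str) -> str:
        payload = json.dumps(
            {"model": self.model, "prompt": prompt, "stream": False}
        ).encode()
        req = urllib.request.Request(
            f"{self.host}/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]


# Adding another provider (OpenAI, Azure, an internal model) only means
# writing one more class with the same complete() signature.
provider = OllamaProvider(model="mistral")
```

Because every backend exposes the same `complete()` call, swapping an online model for a client-hosted one is a configuration change rather than a code change.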

In its beta version, Hume Maestro already includes the GraphAware Hume documentation, helping analysts quickly check the specifics of the function or feature they want to use, including concrete code suggestions when designing the layout of Action Boards or writing Python scripts and Cypher queries. All the usual LLM features, such as summarisation and translation of content, are of course available as well.

With this groundwork in place, the plan for the coming months is to let clients add their analytical and other documentation as well, keeping all the necessary organisational references at hand at all times. Maestro can also produce a high-level report on the current state of an ongoing analysis. Producing reports is a time-consuming task for analysts, and LLMs are capable of doing most of the heavy lifting.

Talk to your data

The activity where hallucinations would hurt the most is interacting with a knowledge graph. Maestro now lets users talk to their graphs in natural language instead of queries, quickly retrieving the necessary information from the existing data. This opens the intelligence up to people unfamiliar with the query language, such as police officers.

They can ask for specific information about a suspect, a vehicle or an address, and the LLM will provide the answer in seconds, without the user having to consult the graph or know the query language. They can quickly see a suspect’s connections to other people, the usual places where the suspect can be found at specific times of the day, and much more. Even in large knowledge graphs, retrieving information about indirect or complex relationships takes seconds.
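GraphAware Hume is built on Neo4j, so behind the scenes a question like the one above is ultimately answered by a Cypher query. A hypothetical translation (the schema with `Person`, `KNOWS` and `LOCATED_AT` labels is illustrative, not Hume’s actual data model) could look like this:

```python
# In a real assistant the LLM generates a parameterised Cypher query from
# the question and the graph schema; here the translation is hard-coded.

QUESTION = "Who does John Doe know, and where is he usually found?"

# Hypothetical query the assistant might generate for the question above.
CYPHER = """
MATCH (s:Person {name: $name})-[:KNOWS]-(contact:Person)
OPTIONAL MATCH (s)-[:LOCATED_AT]->(place:Location)
RETURN contact.name AS contact, collect(DISTINCT place.name) AS usual_places
"""

PARAMS = {"name": "John Doe"}

# The query would then run against the graph (e.g. via the neo4j driver),
# and the LLM would summarise the rows back into a natural-language answer.
```

Parameterising the query (`$name`) rather than splicing user input into the Cypher string is also a basic safeguard against injection.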

While Maestro is still in beta, it is already a powerful tool, with new features being added in every release of GraphAware Hume.