Gods and Lies: Where Should We Integrate?

We've seen countless embarrassing examples of poorly thought-out LLM integrations, primarily rogue chatbots with access to both company and user data. While everyone is rushing to put LLMs here, there and everywhere, what is a use case that actually provides value and enhances the stability of applications and platforms?

Rigid frameworks are great for reliable inputs and outputs, but when something isn't as expected, things can go wrong very quickly. It's certainly possible to hard-code a few solutions to normalise known and likely outliers, but LLMs present us with a completely novel approach to dynamic error detection, correction and improvement of inputs.
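
To make that concrete, here's a minimal sketch of the fallback pattern, assuming an OpenAI-style chat client. The schema, model name and prompt are my own illustrative choices, not a prescribed implementation:

```python
# Sketch: rigid parsing first, LLM normalisation only as a fallback.
# Assumes the openai package and an OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

EXPECTED_KEYS = {"name", "email", "signup_date"}  # hypothetical schema

def parse_record(raw: str) -> dict:
    """Try the rigid, deterministic path first."""
    try:
        record = json.loads(raw)
        if EXPECTED_KEYS.issubset(record):
            return record
    except json.JSONDecodeError:
        pass
    return llm_normalise(raw)

def llm_normalise(raw: str) -> dict:
    """Ask the model to repair an outlier into the known schema."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works
        messages=[
            {"role": "system",
             "content": "Rewrite the user's text as JSON with exactly the "
                        "keys name, email, signup_date. Output JSON only."},
            {"role": "user", "content": raw},
        ],
    )
    # Still validate: the LLM is a fallback, not a trusted source.
    record = json.loads(response.choices[0].message.content)
    if not EXPECTED_KEYS.issubset(record):
        raise ValueError("LLM normalisation failed validation")
    return record
```

Because the rigid path runs first and the model is only consulted on failure, well-formed inputs never depend on the LLM at all.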

Complaints about incorrect information and 'hallucinations' (many loathe the term) are rife amongst what some have labelled 'legacy' engineers, who, depending on your perspective, are either struggling to adapt or remaining sensibly conservative while waiting for the inevitable failure of the technology. A peaceful and loving middle ground can surely be reached by appropriately limiting the scope of agents' abilities while still including them in the loop (for the purposes of this article, an agent is an instance of an LLM with access to at least one tool). With the right guidance, documentation and oversight it is possible to reduce hallucinations and damaging outputs significantly, if not eliminate them entirely.

When I began working on security and LLM projects, I started by giving the agent raw terminal access (on a virtual machine, on a local network). This approach took me some way, but with the addition of just one or two more steps things fell apart quickly. The key, in my experience, is to limit the scope in two ways: by casting the LLM in a management role that calls more traditional coded tools or applications, and by allowing it to operate on and modify data only within a sandwich of control made up of very specific input sources and output destinations, as sketched below. The future is in the intermediary frameworks that glue the stack together.
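
Here's a minimal sketch of that sandwich of control. The tool names, directory layout and dispatch format are hypothetical, and a real system would add logging and stricter validation:

```python
# Sketch: the LLM only chooses among whitelisted, conventionally-coded
# tools, reads from one fixed input source, and writes to one fixed
# destination. Everything else is out of reach by construction.
import json
from pathlib import Path

INPUT_DIR = Path("inbox")    # the only place the agent reads from
OUTPUT_DIR = Path("outbox")  # the only place results are written
OUTPUT_DIR.mkdir(exist_ok=True)

def count_lines(filename: str) -> str:
    """A traditional, deterministic tool the agent may call."""
    path = (INPUT_DIR / filename).resolve()
    if INPUT_DIR.resolve() not in path.parents:
        raise PermissionError("tool may only read from the inbox")
    return str(sum(1 for _ in path.open()))

TOOLS = {"count_lines": count_lines}  # the whole of the agent's reach

def dispatch(llm_reply: str) -> None:
    """Parse the model's tool request and run it inside the sandwich."""
    request = json.loads(llm_reply)  # e.g. {"tool": ..., "arg": ...}
    tool = TOOLS.get(request["tool"])
    if tool is None:
        raise PermissionError(f"unknown tool: {request['tool']}")
    result = tool(request["arg"])
    (OUTPUT_DIR / "result.txt").write_text(result)
```

The model manages, the tools execute: even a hallucinated tool call or a malicious filename dies at the dispatcher rather than on a raw terminal.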

LLMs are not gods (yet), but neither are they useless lie machines that can contribute nothing. To use a cliché from meetings you clocked out of 45 minutes ago: "it's all a question of balance", and you can safely ignore anyone who drags you to hell with either extreme.
