How agentic coding tools are transforming productivity at Desia

Desia Team

Company

January 14, 2026

At Desia, AI is not something added on top of the product. It is part of the environment in which the product is built.

Last year, the development workflow moved from occasional use of language models to working inside a more agentic setup, where tools can read and write files, run commands, inspect logs, and iterate over longer tasks. The result is not only faster delivery, but a shift in how problems are approached, how systems are designed, and how time is spent during development.

From prompts to systems

Early use of LLMs in software development was mostly prompt-based: ask a question, get an answer, copy some code. That approach works for isolated tasks, but it does not fit well with real systems, which involve logs, tickets, partial failures, long-running features, parallel branches, legacy components, and business constraints that all live outside any single prompt or short-lived chat window.

Agentic tools operate inside the same environment as the work itself. They can read and modify files, query databases, run scripts, and inspect production logs, while also maintaining continuity across longer tasks that span multiple sessions and code paths. Debugging becomes a single conversational loop over real data rather than a sequence of manual searches and copy-paste steps across multiple tools, and feature development can proceed in parallel contexts without constantly resetting what the system understands about the broader goal.
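The loop described above can be sketched in a few lines. This is an illustrative harness, not Desia's actual infrastructure: a stubbed plan stands in for the model, and the two tools shown (`read_file`, `run_command`) are hypothetical names for the kind of file and shell access agentic tools expose.

```python
import subprocess
from pathlib import Path

# Minimal illustration of an agentic tool loop: the model proposes tool
# calls, the harness executes them and feeds results back as context.
# The "plan" here is a fixed list; real systems generate it dynamically.

def read_file(path: str) -> str:
    return Path(path).read_text()

def run_command(cmd: list) -> str:
    return subprocess.run(cmd, capture_output=True, text=True).stdout

TOOLS = {"read_file": read_file, "run_command": run_command}

def run_agent(steps):
    transcript = []
    for tool, arg in steps:                # each step: (tool name, argument)
        result = TOOLS[tool](arg)
        transcript.append((tool, result))  # result becomes context for the next step
    return transcript

# Example: inspect a file, then run a command, in one loop over real data.
Path("notes.txt").write_text("check the rate limiter\n")
log = run_agent([
    ("read_file", "notes.txt"),
    ("run_command", ["echo", "tests passed"]),
])
```

The point of the shape, rather than the stub, is that debugging state lives in one transcript instead of being copy-pasted between tools.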

The toolchain in practice

Two tools sit at the centre of the workflow:

  • Claude Code for longer-running tasks such as debugging, architectural reasoning, and documentation that needs to stay in sync with the codebase.

  • VS Code / Cursor for full IDE features like navigation, refactoring, and careful edits, even though most AI-assisted work happens through Claude rather than IDE-embedded copilots.

Around these, CLI-first tools such as Gemini-CLI and Codex are used alongside internal wrappers that expose logs, repositories, and data directly to agents.

A layer of task systems and slash-command workflows structures how agents move through planning, implementation, review, and documentation. This makes it possible to work on multiple branches in parallel without losing context, and enables automated code reviews that remove friction from most pull requests, freeing time for testing and higher-level judgment.
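One way such a task system can be pictured is as a small state machine: each task carries its phase and accumulated notes, so a parallel branch can be picked up later without reconstructing context. The phase names come from the text above; everything else is an illustrative assumption, not Desia's actual implementation.

```python
from enum import Enum, auto

# Sketch of a task that moves through fixed workflow phases, keeping
# notes so work can be resumed across sessions and parallel branches.

class Phase(Enum):
    PLANNING = auto()
    IMPLEMENTATION = auto()
    REVIEW = auto()
    DOCUMENTATION = auto()
    DONE = auto()

ORDER = list(Phase)

class Task:
    def __init__(self, name):
        self.name = name
        self.phase = Phase.PLANNING
        self.notes = []              # context carried across sessions

    def advance(self, note):
        self.notes.append(note)      # persist what was learned in this phase
        if self.phase is not Phase.DONE:
            self.phase = ORDER[ORDER.index(self.phase) + 1]

task = Task("rate-limiter-fix")
task.advance("plan: add token bucket")
task.advance("impl: bucket written, tests green")
```

After two `advance` calls the task sits in review with both notes intact, which is the property that lets automated review slot in without losing the plan.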

What has changed

The biggest change is not just speed, but what becomes practical to do. Teams now routinely spin up throw-away scripts to validate assumptions, generate custom scrapers for filings or transcripts, and build small internal tools that would previously have taken days of manual effort. 
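A throw-away script of the kind described above might look like this: a disposable, stdlib-only parser that pulls filing titles out of an HTML page so an assumption ("every filing row has a title link") can be checked in minutes. The page structure and the `filing-title` class are hypothetical, for illustration only.

```python
from html.parser import HTMLParser

# Quick, disposable check: extract filing titles from an HTML table
# to validate an assumption before building anything real.

class TitleExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        # Flag anchors that carry the (hypothetical) filing-title class.
        if tag == "a" and ("class", "filing-title") in attrs:
            self.in_title = True

    def handle_data(self, data):
        if self.in_title:
            self.titles.append(data.strip())
            self.in_title = False

page = """
<table>
  <tr><td><a class="filing-title" href="/f/1">10-K 2023</a></td></tr>
  <tr><td><a class="filing-title" href="/f/2">10-Q Q3</a></td></tr>
</table>
"""

parser = TitleExtractor()
parser.feed(page)
print(parser.titles)  # sanity check, then the script is thrown away
```

Scripts like this used to cost an hour of boilerplate; when an agent drafts them on demand, the cost of testing an assumption drops to minutes.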

This has also changed how software evolves. Design decisions can be explored and compared before anything is built, and automated reviews catch many issues early. Instead of waiting or guessing, teams can test, inspect, and iterate quickly, shortening the time from a question to something concrete that can be examined or run.

Maintaining perspective

There are limits to this approach. LLM-generated code can be non-deterministic, and it is easy to ship something that appears to work without fully understanding what it implies for costs, rate limits, or business behaviour. Models will produce code on demand, but they will not automatically ask whether that code makes sense in the context of the wider system or the business it supports.

For that reason, fundamentals still matter. Even with highly capable agents, someone needs to understand the overall architecture, the trade-offs being made, and the downstream effects of each decision. The most effective use of these tools looks less like automatic code generation and more like disciplined pair-programming, where outputs are continuously reviewed, questioned, and aligned with a broader plan before anything is committed.

As agent output becomes cheaper, judgment and oversight become more important. Long interactions with agents in particular need to be handled carefully, so that the direction of the work does not drift away from deliberate design.

Why this matters for Desia

There is a difference between using an LLM and engineering an agentic system. While many teams rely on generic chat interfaces, we have built our own infrastructure, with custom tools, structured workflows, and persistent task systems designed to handle complex, long-running engineering work.

This directly shapes the product we build. The problems we have to solve internally—maintaining context over time, limiting hallucinations, and coordinating multi-step processes across real data—are the same problems our clients face in financial workflows. By relying on these agentic systems to build Desia itself, the platform is grounded in proven, reliable architecture, not just the latest hype.

This is still an evolving space, and many teams are exploring similar ideas. For us, the focus remains straightforward: use AI where it meaningfully helps, stay rigorous where it matters, and keep building systems we can trust.

Every AI-assisted commit is just one small step, but together they are helping us build better software for the financial work ahead.

See it in action

If you work in financial services and are curious how this approach translates to real financial workflows, get in touch to see Desia in action.
