Designing for change: volatility-based decomposition

The Seccl system has been designed and built using an architectural approach called ‘volatility-based decomposition’. This guide explains how it works, how it compares to other commonly used methods, and why we’ve chosen it…

From day one, the Seccl system has been designed and built using an architectural approach called ‘volatility-based decomposition’.

It might be an architecture that you’re unfamiliar with – and don’t panic if so! Many of our engineering team joined the business having never heard of it, let alone experienced it first-hand.

That’s where this short intro could help. In it we’ll explain how it works, how it compares to other commonly used methods, and why we’ve chosen it…

A note from Dave, our founder

This approach was largely inspired by the work produced by iDesign, a global software consultancy that Dave (our founder) worked with before he launched Seccl.

Having followed more traditional software architectural practices in the past – such as functional and domain-based architecture (we’ll cover those in a moment) – the volatility-based approach seemed a more suitable alternative that would allow us to navigate the unpredictability of our sector.

After all, the investment industry is particularly prone to change, thanks to a regulatory environment that’s as changeable as it is stringent. Indeed, adapting to this change isn’t optional – it’s essential to every firm’s license to operate. In other words, technology providers in this space have to quickly adapt to survive.

When it comes to software, then, it’s important that it’s designed not just to meet whatever requirements exist at the time of building, but to withstand the inevitable changes that will come our way. And that’s where volatility-based decomposition can help…

What is volatility-based decomposition?

Volatility-based decomposition – also known as the “iDesign method” – is a way of building solid architecture that will withstand the test of time. Put simply, it involves anticipating change, rather than building technology to meet a set of static requirements.

While no one wants to believe that change will be dangerous or costly to their system, it’s pretty much a given in software engineering – particularly in our sector.

Volatility-based decomposition identifies these potential areas of change and packages them into services or building blocks, therefore limiting any damage caused by the change itself.

[Diagram: nine armoured safes, with the middle safe shown in red]

Showing it in action…

Say you have a room full of heavily armoured safes (as you do!). If a grenade is thrown into one of these safes, and then the door is quickly closed, the contents of that particular safe will be destroyed.

However, assuming it’s built of strong stuff, all the other surrounding safes will be left unharmed. The blast is contained within the single safe where the explosion happened, so the damage is minimised.

By applying the logic of volatility-based decomposition, you are acknowledging the potential dangers that come with change, and planning for a contained explosion.

Exploring the alternatives

To help you better understand our approach, let’s talk a bit about the methods we don’t use…

Functional decomposition

Firstly, functional decomposition is a very common engineering method – so much so that you’re probably already familiar with it. If so, you’ll know that it involves building a system around a series of functional requirements.

Since it’s always helpful to provide an analogy, let’s consider a simplistic share trading system that has the following requirements:

  • Buy stocks
  • Sell stocks
  • Settle trades
  • Transfer stock in
  • Report trades

[Diagram: functional decomposition of the share trading system]

Of course, this is a pretty basic example, and in the real world, a stockbroker would have literally hundreds of functional requirements. But straight away, using functional decomposition throws up several challenges…
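To make this concrete, here’s a minimal TypeScript sketch of what functional decomposition tends to produce – one service per requirement. All of the names and return values below are purely illustrative, not Seccl’s actual API:

```typescript
// Hypothetical sketch: functional decomposition maps each requirement
// onto its own service.
class BuyStocksService {
  buy(stockId: string, quantity: number): string {
    return `BUY:${stockId}:${quantity}`;
  }
}

class SellStocksService {
  sell(stockId: string, quantity: number): string {
    return `SELL:${stockId}:${quantity}`;
  }
}

class SettleTradesService {
  settle(tradeId: string): string {
    return `SETTLED:${tradeId}`;
  }
}

// ...plus TransferStockInService, ReportTradesService, and – in a real
// broker – hundreds more, each one a separate unit to integrate,
// test and deploy.
```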

There are too many services

Paying out, processing dividends and corporate actions, regulatory reporting, managing counterparties, recording instrument data – all of these are services that will need to be integrated and tested, resulting in significant testing and deployment overhead.

There’s repetitive code across the system

There will also be repetitive code built across numerous services. For instance, logic to calculate charges (and therefore the amount payable on a trade) will be repeated across at least two services – buying stocks and selling stocks.
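Here’s a hedged sketch of that duplication, assuming a purely illustrative flat dealing charge – notice that both services re-implement the same charging logic:

```typescript
// Illustrative flat dealing charge (an assumption for this sketch).
const COMMISSION = 7.5;

class BuyStocks {
  // Total payable = consideration plus charges
  amountPayable(price: number, quantity: number): number {
    return price * quantity + COMMISSION; // charge logic, copy #1
  }
}

class SellStocks {
  // Net proceeds = consideration minus charges
  proceeds(price: number, quantity: number): number {
    return price * quantity - COMMISSION; // charge logic, copy #2
  }
}

// If the charging model changes (say, to a percentage-based fee),
// both copies must change in lockstep – a classic symptom of
// functional decomposition.
```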

The client is responsible

In this case, it’s likely that the investor client would be responsible for orchestrating flows across services. Imagine if the investor wishes to sell one stock and use the proceeds to purchase another stock – the client would then need to make multiple calls to various services to perform this use case.
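As a sketch of that problem, here’s a hypothetical client performing the “sell one stock, buy another” use case. The service signatures and the stubbed price are invented for illustration – the point is that the sequencing logic lives in the client, and every client must repeat it:

```typescript
type Call = string;

// Stubbed services: the sell service returns proceeds at an assumed
// price of 10 per share; the buy service spends a cash amount.
function sellService(stockId: string, qty: number): { proceeds: number; call: Call } {
  return { proceeds: qty * 10, call: `sell:${stockId}` };
}
function buyService(stockId: string, cash: number): Call {
  return `buy:${stockId}:${cash}`;
}

// The client – not the system – owns the workflow, chaining calls
// and carrying intermediate state (the sale proceeds) between them.
function clientSwitch(from: string, to: string, qty: number): Call[] {
  const sale = sellService(from, qty);
  const purchase = buyService(to, sale.proceeds);
  return [sale.call, purchase];
}
```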

Your system is tied to fixed requirements

Perhaps most importantly of all, the system is now tied to the requirements at the time of designing the architecture. In fact, it’s likely that it will be out of date before the first line of code is even written.

In software development, one thing is certain: requirements change, and often in a short space of time – say, where a use case hasn’t been fully understood or a client has changed their mind. As a result, functional architecture often proves painful and expensive when change inevitably happens.


Domain decomposition

Another alternative to volatility-based decomposition is domain decomposition. Using this method, services are arranged around areas of specific domain logic. In our simple stockbroking example, these might include the likes of “trading” or “transferring”.

In many respects, domain decomposition is functional decomposition in disguise.

As with the examples above, there are simply too many services to cover all the domains of a real-world broker, and the same issues of repetitive code and client orchestration will lead to bad outcomes when change inevitably happens.

The services used in a domain decomposition may well end up bloated, with multiple entry points. For example, the Trading service could end up with the following methods after a change to allow fund trading:

  • buyStocks
  • sellStocks
  • buyFunds
  • sellFunds

Now consider the possibility of the broker offering access for investors to bonds, ETFs, investment trusts or even derivatives and CFDs.
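A brief, hypothetical sketch of how that bloat looks in code – each new asset class adds another pair of entry points to the Trading service (all names and return values are invented for illustration):

```typescript
// Hypothetical domain service: the surface area grows with every
// asset class the broker adds, and callers must know which method
// applies to which instrument.
class TradingService {
  buyStocks(id: string, qty: number): string { return `buyStocks:${id}:${qty}`; }
  sellStocks(id: string, qty: number): string { return `sellStocks:${id}:${qty}`; }
  buyFunds(id: string, qty: number): string { return `buyFunds:${id}:${qty}`; }
  sellFunds(id: string, qty: number): string { return `sellFunds:${id}:${qty}`; }
  // Bonds, ETFs, investment trusts, derivatives, CFDs... each one
  // means another pair of methods and another round of integration
  // testing for every consumer.
}
```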

Can functional and domain methods still be useful?

Both functional and domain decomposition can be useful for other areas of the software development lifecycle.

Functional decomposition is a useful technique for discovering requirements given by clients, particularly if they are fairly woolly or vague.

Domain decomposition can also be useful for organising teams into specific areas of expertise, particularly in complex, arcane business domains (like financial services!). However, unless they’re used alongside volatility-based decomposition, they’re simply not sufficient for anticipating or facilitating change.

How do you identify volatility?

Identifying volatility is perhaps the most challenging aspect of this method.

It’s not always obvious why something is volatile, especially to customers who are used to presenting functional requirements – after all, no customer will ever point out what will change; instead, they will specify what they think the system should do.

The key here is to distinguish between what is volatile and what is variable. Volatility is open ended and expensive to contain (and therefore requires encapsulating in a component), while variability can easily be handled by conditional logic.
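Here’s a sketch of that distinction, using invented examples: the order side is variable (a closed, known set of values, handled by a conditional), whereas the channel for sending trade confirmations is treated as volatile (an open-ended axis of change, so it’s encapsulated behind an interface):

```typescript
// Variable: order side is a closed set – conditional logic is enough.
function cashImpact(side: "buy" | "sell", amount: number): number {
  return side === "buy" ? -amount : amount;
}

// Volatile: how confirmations reach clients is open-ended (email
// today; webhooks, post or something unforeseen tomorrow), so it is
// hidden behind a component boundary rather than an if/else.
interface ConfirmationChannel {
  send(tradeId: string): string;
}

class EmailConfirmation implements ConfirmationChannel {
  send(tradeId: string): string { return `email:${tradeId}`; }
}

// The calling code never changes when a new channel is added.
function confirm(tradeId: string, channel: ConfirmationChannel): string {
  return channel.send(tradeId);
}
```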

To identify volatility, there are two questions you can ask:

1. What changes across customers at the same point in time?
2. What changes at the same customer over time? For instance, what are the changes in the business context of a customer, or change in the use of the system?

By way of example, let’s go back to our simple brokerage system. Equity trades are currently settled via CREST, which is the UK securities clearing system. This means that only instruments available on CREST (that is, UK, US and European stocks) can be traded.

If a company has plans to trade in the Far East or Australia, then completely different clearing systems will need to be integrated.

In other words, the clearing system through which the broking system settles equity trades is volatile.
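One way to encapsulate that volatility, sketched in TypeScript: the settlement logic depends only on a `ClearingSystem` interface, so adding a Far East venue means adding an implementation rather than rewriting the engine. The class names, markets and venue identifiers below are illustrative assumptions:

```typescript
interface ClearingSystem {
  canSettle(market: string): boolean;
  settle(tradeId: string): string;
}

// CREST handles UK, US and European stocks (per the example above).
class CrestClearing implements ClearingSystem {
  canSettle(market: string): boolean {
    return ["UK", "US", "EU"].includes(market);
  }
  settle(tradeId: string): string { return `CREST:${tradeId}`; }
}

// A later, illustrative addition for a Far East market – no existing
// code has to change to accommodate it.
class FarEastClearing implements ClearingSystem {
  canSettle(market: string): boolean { return market === "JP"; }
  settle(tradeId: string): string { return `FAREAST:${tradeId}`; }
}

// The settlement engine picks a venue; it stays stable as venues grow.
function settle(tradeId: string, market: string, venues: ClearingSystem[]): string {
  const venue = venues.find(v => v.canSettle(market));
  if (!venue) throw new Error(`no clearing system for ${market}`);
  return venue.settle(tradeId);
}
```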

An alternative technique is to attempt to design the system for one of your competitors – identifying differences in propositions could well uncover volatility.

As part of identifying volatility, areas that are out of scope should be called out and documented. For instance, in the broking system example above, the company never wishes to offer its clients the ability to trade currencies – so this would be out of scope.

How does this all apply to the Seccl system?

Volatility-based decomposition allows us to build solid architecture that caters for unpredictability – but what does this look like in practice?

Let’s take a look at what using this method looks like for Seccl’s engineers, day-to-day:

  • Multiple clients can be supported, with no orchestration logic at this level
  • The transaction workflow manager handles the workflow logic for different transaction types (trades and transfers), as well as different asset types (equities, funds, etc.)
  • The settlement engine contains business logic for the various stages of the transaction’s settlement cycle
  • The resource access layer shields the business logic from any storage volatility. This allows transactions to be held in a different database in the future (transaction resource) and handles volatility in the clearing system (settlement resource)
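By way of illustration only – this is a hypothetical sketch, not Seccl’s actual code – here’s how a resource access layer can shield business logic from storage volatility:

```typescript
// The business logic depends on this abstraction, never on a
// concrete database.
interface TransactionResource {
  save(id: string, data: string): void;
  load(id: string): string | undefined;
}

// Today an in-memory store; tomorrow a different database – the
// settlement engine above this layer never changes.
class InMemoryTransactionResource implements TransactionResource {
  private store = new Map<string, string>();
  save(id: string, data: string): void { this.store.set(id, data); }
  load(id: string): string | undefined { return this.store.get(id); }
}

class SettlementEngine {
  constructor(private transactions: TransactionResource) {}
  // Record a settlement state and read it back via the resource layer.
  record(id: string, state: string): string {
    this.transactions.save(id, state);
    return this.transactions.load(id) ?? "missing";
  }
}
```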

Are there any downsides?

Nothing is all good or all bad, and just because volatility-based decomposition is our chosen engineering method doesn’t mean it doesn’t have its flaws.

By far the biggest downside of volatility-based decomposition is the length of time it takes to convert from a functional or domain mindset – this shouldn’t be underestimated!

It’s also a fairly niche approach, and not one that would typically be taught at college or coding academies – hence you may not have used it before. Volatility-based decomposition takes time to learn, and you need space to experiment before your understanding beds in. We know it’s not easy!

When Dave first started Seccl, it took him 6 months to apply volatility-based decomposition to our architecture – but he’ll tell you it was worth that time and effort.

For a business with global ambitions that exists in an ever-changing regulatory landscape – that caters to a variety of different clients, all of whom have a plethora of investor and trading styles – having an architecture decomposition that can allow change to be easily accommodated is not optional. It’s essential.

Find out more

Related reading

  • Permission to launch: the what & how of platform permissions
  • The Seccl SIPP: an introduction for investment platforms
  • Launching a new investment or financial advice fintech

Get in touch

Ready to get started?

If you want to find out more or kick off a conversation, then get in touch – we’d love to chat
