
Author: Paul Mullins, Partner

Operational efficiency for firms: Clean your data…but only when you have to

Posted by Paul Mullins on 5 June 2024

When it comes to operational efficiency, or inefficiency, why do a job three times over when you could do it once instead?

You wouldn’t shower three times in a morning, would you? So why allow your company to clean the same data three different times a day?

The reality for many firms right now is that a review of your operations would very likely reveal room for improvement and plenty of opportunity to make efficiencies.

In this blog we cover the key considerations for managers looking for the most efficient ways to organise and clean data. It will be especially useful if you are not sure how to reduce inefficiencies, or if you are unclear how changing your processes so that key tasks are done only once can save your business money.

We understand that, at first glance, this may not seem a priority, but we also know that most Asset Managers have developed their operations over a long period of time. Overgrown departments, siloed functions and third-party relationships combine to create a complex network of data.

In many cases the operational units have become beasts, complicated by rapid growth, technology change and, often, mergers and acquisitions. New functions are frequently added to the business independently of existing departments, creating silos, with little or no attempt to find out how existing business units could be leveraged.

Add a trend of outsourcing various operations to this melange, and the result, over time, is multiple data sources, several sources of truth and many reconciliation processes to tie each data source to the others, both internally and externally.

As a result, we often see managers needing large operational units dedicated to data gathering, cleansing and manipulation, usually focused on one specific area of the business. For example, some managers maintain an investment book of record (“IBOR”) that is separate and distinct from the accounting book of record (“ABOR”), and managers with legacy businesses or multiple administrative agents may in fact hold multiple versions of these. We also see managers who need to combine IBOR and ABOR to enrich data. In many cases this type of process is carried out several times, in different departments, for different end purposes.
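To make that concrete, the short sketch below shows, in deliberately simplified form, what one such combination step can look like: an IBOR position feed enriched with accounting values from the ABOR, with any quantity breaks flagged for investigation. The field names, identifiers and tolerance are purely illustrative assumptions, not a reference implementation, but the point stands: when several departments each build their own version of this join, the same cleansing work is repeated several times over.

# Minimal, illustrative sketch only: combining an IBOR position feed with
# ABOR accounting data and flagging quantity breaks. The field names,
# identifiers and tolerance are hypothetical assumptions for this example.

IBOR_POSITIONS = [
    {"portfolio": "FUND_A", "security": "GB00B03MLX29", "quantity": 10_000},
    {"portfolio": "FUND_A", "security": "US0378331005", "quantity": 2_500},
]

ABOR_POSITIONS = [
    {"portfolio": "FUND_A", "security": "GB00B03MLX29", "quantity": 10_000, "book_value": 245_000.0},
    {"portfolio": "FUND_A", "security": "US0378331005", "quantity": 2_400, "book_value": 413_000.0},
]

def enrich_and_reconcile(ibor, abor, tolerance=0):
    """Join the two books on (portfolio, security); enrich IBOR rows with
    accounting values and report any quantity breaks."""
    abor_index = {(p["portfolio"], p["security"]): p for p in abor}
    enriched, breaks = [], []
    for pos in ibor:
        key = (pos["portfolio"], pos["security"])
        acc = abor_index.get(key)
        # Enrich the IBOR row with the accounting book value, if one exists.
        enriched.append(dict(pos, book_value=acc["book_value"] if acc else None))
        # Flag missing positions or quantity differences beyond tolerance.
        if acc is None or abs(acc["quantity"] - pos["quantity"]) > tolerance:
            breaks.append(key)
    return enriched, breaks

if __name__ == "__main__":
    enriched, breaks = enrich_and_reconcile(IBOR_POSITIONS, ABOR_POSITIONS)
    print(breaks)  # [('FUND_A', 'US0378331005')] -- the 100-share difference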

If these situations sound familiar, the key considerations about your data models set out below should be of genuine interest.

Key considerations:

The considerations fall under four areas: understanding your current data models, creating a single source of truth and data standards, making efficient use of data, and coordinating the use of systems between departments. Each should be interrogated as follows:

1.   Understanding your current data models

  • What operations are being covered by each data model?
  • What overlap in data exists between data models?
  • What infrastructure and technology are being used to house and manage your data? Is it consistent across all data models?

2.   Creating a single source of truth and data standards

  • Who in your organisation is responsible for data? Does your organisation have a Chief Data Officer?
  • Do you have a centralised or decentralised data model?
  • Have you created a set of data standards and definitions?
  • Have you defined the entities, their attributes, and relationships in your model?
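On the last question above, defining entities, attributes and relationships does not have to mean a heavyweight modelling exercise. The sketch below is one minimal, hypothetical way to record those definitions once, in a typed form that every team can reference; the specific entities and fields are assumptions chosen for illustration, not a prescribed model.

# Illustrative sketch only: writing down entities, their attributes and the
# relationships between them as shared, typed definitions. The entities and
# fields shown are assumptions for the example, not a prescribed data model.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Instrument:
    isin: str                 # identifier agreed in the firm's data standards
    name: str
    currency: str             # ISO 4217 code, per the data standards

@dataclass(frozen=True)
class Portfolio:
    portfolio_id: str
    base_currency: str

@dataclass(frozen=True)
class Position:
    # Relationships are expressed by reference to the agreed identifiers,
    # not by copying attributes into yet another local variant of the data.
    portfolio_id: str
    isin: str
    quantity: float
    as_of: date

Whether the standards live in code, a data dictionary or a modelling tool matters less than the fact that there is exactly one agreed set of definitions to point to.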

3.   Making efficient use of data

  • Is your data normalised in some way?
  • Is anyone responsible for ensuring that existing data is not duplicated, and that “new” data is actually new and not just another form of the same data held elsewhere?
  • Do you have routines for ensuring each department is tapping into the same data sets as required and can rely on data accuracy via upstream validation processes?
  • Where data is being transformed through calculations, has your organisation thought about when it is best to store calculated results versus maintaining flexibility to recalculate the output at point of need?
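The questions above on duplication and on calculated results often come down to a simple trade-off. The sketch below, using hypothetical names and a toy price lookup, contrasts storing a calculated result, which then has to be refreshed everywhere it is copied, with keeping only the normalised inputs and recalculating the value at the point of need.

# Minimal sketch, not a recommendation: contrasting a stored, copied
# calculation with recalculating at the point of need. The names and the
# price lookup are hypothetical.

PRICES = {"GB00B03MLX29": 24.50, "US0378331005": 172.10}   # latest clean prices

# Option 1: store the calculated result alongside the position. Fast to read,
# but every copy must be refreshed whenever prices or quantities change.
position_stored = {"isin": "GB00B03MLX29", "quantity": 10_000, "market_value": 245_000.0}

# Option 2: keep only the inputs and derive the value when it is needed,
# so there is a single calculation and no stale copies to reconcile.
position_lean = {"isin": "GB00B03MLX29", "quantity": 10_000}

def market_value(position, prices=PRICES):
    """Recalculate market value from the normalised inputs on demand."""
    return position["quantity"] * prices[position["isin"]]

print(market_value(position_lean))   # 245000.0, derived rather than copied

Neither option is always right; the question is whether your organisation has made the choice deliberately and applies it consistently across departments.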

4.   Coordinating the use of systems between departments

  • Are different departments using the same systems to access, transform and report on the same types of data?
  • Has your organisation reviewed each of its systems to ensure that you are using only those that are necessary and that they interact well with each other?
  • Have these been correctly configured or designed specifically to complete the tasks required?

At AlgoMe Consulting, we believe that efficiently organising your data is a first step to ensuring that your operations work effectively. Just as important is how you then design your operational and business teams, upstream and downstream, to make the best use of your data and systems. If you do not have all the answers, we can help you navigate the territory.

If you would like to hear more or find out how AlgoMe Consulting can help please contact Paul Mullins at paul.mullins@algome.com.