Compliance Data Management: The Case for Automating Data Preparation (1 of 2)

Author: Wavicle Data Solutions


Why the cost of financial regulatory practices is out of control

 

The high price of regulatory compliance

 

Regulatory compliance is projected to cost global financial services companies $180.9 billion this year, according to a report from LexisNexis Risk Solutions. Personnel costs are estimated to account for 60% to 80% of that total.

 

Though a variety of factors contribute, one driver stands out: the extensive labor force banks rely on to compile and validate data for compliance reporting. These tasks never end, demanding ongoing development to incorporate new data sources and to modify reports as your business, the markets, and regulatory mandates change.

 

The only way to get ahead is to increase automation and simplify how all this information is organized. That calls for a purpose-built data mart or warehouse dedicated to compliance, where information is structured so it can be rapidly and reliably adapted to meet business and regulatory demands.

 

 

The problem: time-consuming manual compliance management and data preparation processes

 

Whether it’s the Basel Committee on Banking Supervision’s risk data aggregation principles (BCBS 239) or stress testing, banks are required to aggregate data across business lines, asset types, and regions to measure and report their enterprise-wide risk profile multiple times per year.

 

It can take months and many people to gather this data from dozens of systems (retail, commercial, loans, credit cards, investments, etc.) and ensure the data is accurate for risk reporting. 

 

Much of the problem lies in compliance data management and preparation – the process of gathering, cleansing, transforming, and validating raw data to prepare it not only for internal reporting and analysis but also for regulatory assessments. 
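To make this concrete, here is a minimal sketch of those four steps (gather, cleanse, transform, validate) in Python with pandas. Every file path, column name, and rule below is hypothetical; a real pipeline would apply many more rules across many more systems.

```python
import pandas as pd

def prepare_loan_extract(path: str) -> pd.DataFrame:
    """Gather, cleanse, transform, and validate one raw source extract.
    Illustrative only: the columns and rules are hypothetical."""
    # Gather: load a raw extract produced by a source system
    df = pd.read_csv(path)

    # Cleanse: remove duplicates and normalize inconsistent values
    df = df.drop_duplicates(subset=["account_id"])
    df["currency"] = df["currency"].str.upper().str.strip()

    # Transform: conform source fields to the warehouse schema
    df = df.rename(columns={"acct_bal": "outstanding_balance"})
    df["as_of_date"] = pd.to_datetime(df["as_of_date"])

    # Validate: reject records that would corrupt downstream risk reports
    invalid = df["outstanding_balance"].isna() | (df["outstanding_balance"] < 0)
    if invalid.any():
        raise ValueError(f"{int(invalid.sum())} records failed validation")

    return df
```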

 

Most banks have developed some type of data warehouse or data lake as a single location to capture compliance-related data from all these systems. But as they launch new products, acquire other banks, or add new markets, they have more data to ingest, which requires more data preparation. 

 

For a variety of reasons, many banks today still struggle with complex data preparation tasks, which can lead to slow data integration and poor data quality:

 

  • Legacy systems: Core banking systems that have been in place for decades can compound the difficulty of compliance reporting.

    Not only do they store data differently than modern technologies, but they often contain layers of complex integrations that have been deployed over the years to improve or expand their functionality.

    Complicated programming is needed to draw data from these systems and deliver it to modern, digital technologies for compliance reporting. And as reporting requirements continuously change, more programming is needed to get the right data to satisfy them.
  • Disparate data sources: Each of a bank’s many systems creates and stores data with its own taxonomies, formats, and schemas, which must be reconciled before the data is loaded into a repository (such as a data warehouse or data lake) for compliance reporting.

    Most banks still rely on manual coding to map data from source systems to the appropriate target databases, a labor-intensive, expensive process; a declarative alternative is sketched after this list.
  • Data lineage unknown: This is one of the most common problems we hear about time and time again. By the time data makes its way to a data warehouse or compliance reporting solution, it can be difficult to know where the data originated, let alone to see any changes or transformations that have been made to it, and how and where it flows within the organization.

    Without insight into the data’s lineage, identifying and correcting inaccurate data during the reporting cycle becomes slow and painstaking; the second sketch after this list illustrates one lightweight way to record lineage.
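To illustrate the mapping problem described under "Disparate data sources," here is a minimal sketch of a declarative alternative to hand-coded, per-system loaders, again in Python with pandas. The source systems, field names, and target schema are all hypothetical.

```python
import pandas as pd

# Hypothetical mappings: each source system declares how its fields
# conform to a shared compliance schema, instead of a developer
# hand-coding a separate loader for every system.
SOURCE_MAPPINGS = {
    "retail_core": {"cust_no": "customer_id", "bal": "balance", "ccy": "currency"},
    "cards": {"cardholder_id": "customer_id", "outstanding": "balance",
              "curr_code": "currency"},
}

def conform(source: str, df: pd.DataFrame) -> pd.DataFrame:
    """Rename source-specific columns to the target schema and keep
    only the mapped fields."""
    mapping = SOURCE_MAPPINGS[source]
    return df.rename(columns=mapping)[list(mapping.values())]
```

Because the mapping lives in configuration rather than code, onboarding a new source system becomes a configuration change instead of a fresh development project.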
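And to illustrate the lineage gap, a lightweight lineage record (hypothetical, as before) can travel with each dataset so that every transformation leaves a trace an analyst can follow back to the source:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    """Hypothetical lineage entry: where a dataset originated and
    what was done to it at each hop."""
    dataset: str
    source_system: str
    steps: list = field(default_factory=list)

    def record_step(self, description: str) -> None:
        self.steps.append((datetime.now(timezone.utc).isoformat(), description))

# Every transformation appends a step, so a reported figure can be
# traced back to its origin during the reporting cycle.
lineage = LineageRecord("loan_balances", source_system="retail_core")
lineage.record_step("renamed acct_bal -> outstanding_balance")
lineage.record_step("dropped 12 duplicate account_id rows")
```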

 

Perpetuating delays with more and more human resources

 

To remediate these data quality issues, institutions often depend on a large labor force to develop code to integrate data from multiple systems, validate the data, track down the sources of inconsistent or inaccurate data, and fix the data.

 

It’s stressful, time-consuming work that leaves many in the industry feeling overwhelmed, exhausted, and dissatisfied.

 

“Some institutions have resorted to establishing vast data-remediation programs with hundreds of dedicated staff involved in mostly manual data-scrubbing activities,” reported McKinsey & Company.

 

The problem with this approach is that it is expensive, unscalable, and unreliable.

 

As regulations change, the compliance burden on banks will only grow, further driving up the demand for data and, in turn, the resources and costs required to meet it.

 

 

Automate data preparation to accelerate compliance 

 

The current solution isn’t completely broken, but it leaves a lot of room for improvement. Not only is it expensive, it’s also slow.

 

As compliance costs have grown, “banks are now seeking to improve the efficiency as well as the effectiveness of their compliance departments,” and are pushing for “greater levels of automation throughout the end-to-end data life cycle.” (McKinsey)

 

We see data preparation as a key area for firms to automate to limit the need for manual remediation and reduce the time it takes to create compliance reports. Specifically, two areas of data preparation are prime opportunities for automation: data ingestion and data quality. 

 

The process of building data pipelines to ingest data from source systems into a data warehouse or data lake can require many weeks of complex coding and development. Likewise, data validation is a very manual process, relying on a team of people to review reports, validate results, and track down the source of any erroneous data – combing through tables and columns within many databases to find and remediate errors. 
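As a simple illustration of what automating the data quality side can look like, the sketch below expresses validation as declarative rules that run automatically when data is staged, so bad records are flagged at ingestion rather than hunted down in reports weeks later. The rule names, columns, and accepted values are hypothetical; real deployments typically lean on a dedicated data quality tool.

```python
import pandas as pd

# Hypothetical declarative quality rules, evaluated on every staged extract.
QUALITY_RULES = {
    "no_null_ids": lambda df: df["account_id"].notna(),
    "positive_balance": lambda df: df["outstanding_balance"] >= 0,
    "valid_currency": lambda df: df["currency"].isin(["USD", "EUR", "GBP"]),
}

def run_quality_checks(df: pd.DataFrame) -> dict:
    """Return the number of failing rows per rule for a staged extract."""
    return {name: int((~rule(df)).sum()) for name, rule in QUALITY_RULES.items()}
```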

 

In the next installment of this blog, we will take a deeper look at the potential to automate data ingestion and data quality processes as a way to drive clean data into compliance reports and significantly reduce reliance on large teams to get the data right.

 

Check out the second blog now: Compliance Data Management: Data Preparation Saves Time and Money