Native Visual Analytics for Regulatory Compliance
Native visual analytics accelerates regulatory compliance in the financial services industry because it gives subject matter experts the direct data access and transparency they need to meet the heightened expectations of regulators. This page describes the concept of native visual analytics and provides examples of its application to specific regulatory challenges.
Subject Matter Experts Drive Solutions
For the purposes of this page, subject matter experts are the end users who understand the nuances of how to derive meaning (and thus value) from data. They include compliance officers, front office traders, risk managers, finance analysts, back office operations staff, senior management, and anyone else who influences strategic decisions based on their use and understanding of data.
Native visual analytics refers to the ability to run analytics native to where the data resides. This capability is what differentiates native visual analytics from traditional business intelligence (BI) and visualization tools. In the native case, data stays in place, providing high-definition access to all granular data in its native state. A simplified, in-cluster and in-cloud architecture runs analytics directly within Hadoop nodes, cloud instances, or other scale-out modern data platforms. Even when you are working with multiple data sources, the analytics and visualization engines are distributed at each source simultaneously. In the traditional BI case, data is moved to a separate place before the analytics are run. Analytics done after the move depend on getting that move right, in a timely manner, on the first try. The experts end up waiting to make decisions rather than driving the solutions.
The “native” aspect of native visual analytics is what empowers subject matter experts to drive solutions: they can run analytics where the data resides and build their own data apps tailored to what they need, when they need it. It shifts the paradigm from passive analytics to data applications.
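To make the in-place model concrete, the sketch below uses PySpark as a stand-in query engine (this page does not prescribe a specific engine), with a hypothetical trades table on HDFS; the path and column names are illustrative assumptions. The point is that the aggregation is planned and executed on the cluster nodes that already hold the data, with no extract to a separate BI server first.

```python
# Minimal PySpark sketch of in-place analytics. The engine choice (Spark),
# the HDFS path, and the column names are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("in-place-analytics").getOrCreate()

# Register the data where it already lives; no extract/copy step.
spark.read.parquet("hdfs:///warehouse/trades").createOrReplaceTempView("trades")

# The query runs on the nodes holding the Parquet files.
daily_notional = spark.sql("""
    SELECT trade_date, desk, SUM(notional) AS total_notional
    FROM trades
    GROUP BY trade_date, desk
""")
daily_notional.show()
```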
Core Concepts Critical to Satisfying Regulatory Requirements
There are three core concepts critical to satisfying heightened regulatory expectations in the financial services industry: a) granular first; b) leverage all data; and c) self-service.
- Granular First: Data granularity is maintained so that you can model the story from the ground up, ensuring that the detail is there when you need to drill down again. This satisfies two regulatory requirements: modelability¹ and fidelity².
- Leverage All Data: The data that makes up financial products and activities is both unstructured (legal parameters and electronic communications) and structured (quantitative values), and it can be real-time or historical. Native visual analytics is agnostic to data type, and its real-time and historical analysis capabilities are required to articulate the full story to regulators (e.g., electronic communications of pre-trade intent are needed for Dodd-Frank Act (DFA) trade reconstruction, along with the lineage of post-trade activities).
- Self-Service: Self-service analytics with zero data movement enables business and regulatory analysts to adapt to evolving regulatory requirements through on-demand access to multiple data sources and through data apps that they can revise and edit as needed. Subject matter experts can join trade data from the respective sources on demand and enrich their analysis by connecting alternative data sources as needed (see the sketch after this list).
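As a rough illustration of a self-service, multi-source join, the sketch below again assumes PySpark; the paths, schemas, and the order_id join key are hypothetical. It pairs structured post-trade records with unstructured pre-trade communications, the combination DFA trade reconstruction calls for.

```python
# Hypothetical self-service join across two sources; paths, schemas,
# and the order_id key are assumptions for illustration only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("trade-reconstruction").getOrCreate()

# Structured post-trade records in the cluster...
trades = spark.read.parquet("hdfs:///warehouse/trades")
# ...joined on demand to unstructured pre-trade communications in the cloud.
ecomms = spark.read.json("s3a://compliance-archive/ecomms/")

# Link pre-trade intent to post-trade activity on a shared order identifier.
reconstruction = (
    trades.join(ecomms, on="order_id", how="left")
          .select("order_id", "trade_timestamp", "counterparty", "message_text")
)
reconstruction.show(20, truncate=False)
```

A left join keeps every trade in view even when no matching communication exists, which is exactly the gap a reconstruction review needs to surface.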
Native Visual Analytics Applied to Specific Regulatory Challenges
The table below provides examples of specific regulatory challenges and how the capabilities of native visual analytics enable holistic solutions.
¹ Modelability refers to the ability to prove that financial models and reporting are derived from real data. In the case of the Basel Fundamental Review of the Trading Book (FRTB), a modelable liquidity analysis is one in which every product has 24 or more pricing observations throughout the year, with no more than 30 days between consecutive observations. If the data is not modelable, capital charges go up (the inputs are treated as non-modelable risk factors). You need granularity first to prove this.
² Fidelity refers to the accuracy and exactness of data. The integrity of underlying data erodes when source data is moved, resulting in data loss and/or transformation.
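The modelability test in footnote 1 is mechanical enough to sketch in code. The Python example below is a minimal sketch of that check; the observations table, its product_id and obs_date columns, and the use of pandas are all illustrative assumptions, not part of any prescribed implementation.

```python
import pandas as pd

def modelability_flags(observations: pd.DataFrame,
                       min_obs: int = 24,
                       max_gap_days: int = 30) -> pd.Series:
    """Flag each product as modelable per footnote 1: at least 24 pricing
    observations over the year, with no gap between consecutive
    observations longer than 30 days. Assumes `observations` already
    covers the one-year window, one row per observation."""
    def check(dates: pd.Series) -> bool:
        dates = dates.sort_values()
        if len(dates) < min_obs:
            return False
        gaps = dates.diff().dt.days.dropna()
        return bool((gaps <= max_gap_days).all())

    return observations.groupby("product_id")["obs_date"].apply(check)

# Example: three observations, including a 40-day gap, fail on both counts.
obs = pd.DataFrame({
    "product_id": ["IRS-10Y", "IRS-10Y", "IRS-10Y"],
    "obs_date": pd.to_datetime(["2023-01-05", "2023-01-20", "2023-03-01"]),
})
print(modelability_flags(obs))  # IRS-10Y -> False
```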