Orchestration Technology & Its Application in Solving Compliance Challenges

In Sphonic’s last blog post, Riten Gohil discussed the important role “orchestration” is now playing in Regulatory Compliance, especially with regard to AML & KYC Compliance. In the article he spoke about how orchestration platforms have emerged in recent years as the driving force behind many strategies surrounding identity and business verification. The argument in the article’s opening, that identity might be broken, is an intriguing one, especially when you consider the myriad technologies that have appeared over the years to tackle the problem.

Orchestration platforms can play a key role in piecing together an identity strategy that works best for each firm: one that is as robust as possible for its requirements while meeting its regulatory obligations. Orchestration platforms are also a great future-proofing solution for any compliance strategy, as they allow for flexibility in the face of changing regulations and new technologies. In this article we are going to explore some of the practicalities of orchestration, highlight the key differences between orchestration and ‘decision engines’, and look at the practical link-ups between orchestration, decisioning and CRM systems.

Connecting to APIs

We often hear that orchestration is “just connecting to a series of APIs”, and in its most simplistic form it is easy to see why many come to that initial conclusion. After all, orchestration involves making requests to a myriad of 3rd party APIs and consuming the data returned. There is a lot more to orchestration, however, and in this article we will explore some of the key components and differentiate between technologies that orchestrate and those that merely ‘aggregate’.

Big Data

Having access to a myriad of 3rd party data sources earns a big tick in relation to “accessing big data”. But what does this actually mean? The key is to use an orchestration engine to access and process only the data of significance required to make a decision and meet regulatory requirements. To us at Sphonic, ‘big data’ is not only about having access to a large pool of data but about extracting and processing the data of importance (from the larger pool available), both for decision making at that point in time and for building an ongoing, auditable knowledge of your customers. It is not a one-time action of acquiring lots of data; it is the targeted use of data to build an evolving knowledge of your customers.

So, you have connected to some APIs.

  • What if an API “times out”?
  • What if an API does not return the complete data you need?
  • What if an API call throws back anomalous data or items of concern?
  • Do you simply decline or mark for review, or are you able to clear up such anomalies during the same synchronous process? (See the sketch below.)
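
To make this concrete, here is a minimal sketch (not Sphonic’s implementation) of how an orchestration layer might handle a vendor time-out or an incomplete response by re-routing to a secondary provider within the same synchronous flow. The vendor URLs and the response field names are illustrative assumptions.

```python
import requests

PRIMARY = "https://primary-vendor.example.com/verify"    # hypothetical endpoint
FALLBACK = "https://fallback-vendor.example.com/verify"  # hypothetical endpoint
REQUIRED_FIELDS = {"name_match", "address_match", "dob_match"}

def call_vendor(url, payload, timeout_s=2.0):
    """Call one vendor API; treat a time-out or HTTP error as a miss."""
    try:
        resp = requests.post(url, json=payload, timeout=timeout_s)
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException:
        return None  # timed out or errored: let the caller re-route

def verify_identity(payload):
    """Try the primary source; re-route to a fallback if data is incomplete."""
    result = call_vendor(PRIMARY, payload)
    if result is None or not REQUIRED_FIELDS <= result.keys():
        result = call_vendor(FALLBACK, payload) or {}
    missing = REQUIRED_FIELDS - result.keys()
    return {"data": result, "complete": not missing, "missing": sorted(missing)}
```

The point is that a time-out or a missing field becomes a routing decision within the same call, not an automatic decline or a manual review.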

Workflow Management

Remember, not all vendors will return responses at exactly the same point in time, so data needs to be held in-flight and augmented. Routing strategies may need to be deployed to call specific in-country data sources, or additional services where incomplete or anomalous data is returned in the primary flow.
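
As an illustration, here is a minimal sketch, using Python’s asyncio, of holding partial results in-flight until slower vendors respond and then merging them; the vendor coroutines and their response shapes are hypothetical.

```python
import asyncio

async def fetch_bureau(subject):
    await asyncio.sleep(0.3)  # stand-in for a fast vendor call
    return {"address_match": True}

async def fetch_sanctions(subject):
    await asyncio.sleep(0.8)  # slower vendor: responses arrive at different times
    return {"sanctions_hit": False}

async def run_workflow(subject):
    # Launch both calls at once; hold results in-flight until both complete.
    bureau, sanctions = await asyncio.gather(
        fetch_bureau(subject), fetch_sanctions(subject)
    )
    return {**bureau, **sanctions}  # augment the in-flight record

print(asyncio.run(run_workflow({"name": "A N Other"})))
```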

Besides capturing data, do you need to compute additional points of interest within the workflow, from both an analysis and an audit perspective? The latest JMLSG guidelines in the ‘Approach to using Data in Electronic Forms’ (sect 5.3.33 & 5.3.34 pg 80) state that “the firm must be satisfied that the identification method, data/source is reliable and has integrity, secure from fraud & misuse.” At Sphonic we ensure we capture not just the results but the sources and strength of the data, streamed for reporting and for our clients’ policy requirements, namely “do we need more data?”
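
A minimal sketch of what capturing the source and strength of a data point might look like; the field names and the 1–5 strength scale are illustrative assumptions, not Sphonic’s schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class EvidenceRecord:
    check: str       # e.g. "address_verification"
    result: bool
    source: str      # e.g. "credit_bureau", "government_register"
    strength: int    # illustrative 1-5 reliability score for the source
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Each check in the workflow yields an auditable record, not just a result.
evidence = EvidenceRecord(
    check="address_verification", result=True,
    source="credit_bureau", strength=5,
)
```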

The guidelines talk about both a risk-based approach and an understanding of the characteristics and evidence of identity.

Using (and proving you have used) a strong, reputable source (say, Government or Credit Bureau data) to determine that an individual resides where they claim and that their DOB is verified is key. If you can deploy this alongside a risk-based approach within the same real-time process, you have a robust end-to-end process. This is all the more important given that digital identities change in relation to devices, phones and locations.

Orchestration vs Decision Engines

Above, I alluded to the differences between orchestration and decision engines. Decision engines receive data and run rules or scorecards against data sets, and more often than not agents can adjust rules and scores themselves. Orchestration engines, by contrast, run rules about what data is needed (“data acquisition” strategies). Herein lies a key difference: you very rarely need to change data capture rules, or at least not with the same frequency as scoring rules. The sketch below illustrates the distinction.
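
A hypothetical sketch of that distinction: the acquisition rules decide which sources to call (and change rarely), while the scoring rule judges the returned data (and is tuned frequently). All names and thresholds here are illustrative.

```python
def acquisition_rules(applicant: dict) -> list[str]:
    """Orchestration: decide WHICH sources to call. Changes rarely."""
    sources = ["credit_bureau"]
    if applicant["country"] != "GB":
        sources.append("in_country_register")   # routing by market
    if applicant.get("high_risk"):
        sources.append("document_verification")
    return sources

def scoring_rule(data: dict) -> str:
    """Decisioning: judge the data. Tuned frequently by analysts."""
    score = 40 * data.get("address_match", 0) + 60 * data.get("dob_match", 0)
    return "accept" if score >= 60 else "review"
```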

So, when would you change?

  • Changes in regulation meaning enhanced data is required
  • Changes in locations i.e. expanding into new markets
  • Changes in vendors (3rd party data providers)
  • Consuming new innovation solutions via APIs (this will constantly happen)


On all four points, this is where top-end orchestration engines do their job well, seamlessly adding in new data points with no changes required client-side. It simply becomes a case of connecting to a single API endpoint and receiving responses in a standard format, irrespective of the format each vendor supplies, as orchestration engines are designed to normalise data where necessary.
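
As an illustration, here is a minimal sketch of that normalisation step: two hypothetical vendors return differently shaped payloads, and the orchestration layer maps both onto one canonical schema so the client-side contract never changes.

```python
def normalise_vendor_a(raw: dict) -> dict:
    # Vendor A uses flag-style "Y"/"N" fields (hypothetical format).
    return {
        "name_match": raw["nameMatched"] == "Y",
        "dob_match": raw["dobMatched"] == "Y",
        "source": "vendor_a",
    }

def normalise_vendor_b(raw: dict) -> dict:
    # Vendor B nests booleans under "matches" (hypothetical format).
    return {
        "name_match": raw["matches"]["name"],
        "dob_match": raw["matches"]["date_of_birth"],
        "source": "vendor_b",
    }

NORMALISERS = {"vendor_a": normalise_vendor_a, "vendor_b": normalise_vendor_b}

def normalise(vendor: str, raw: dict) -> dict:
    """One standard shape back to the client, whichever vendor replied."""
    return NORMALISERS[vendor](raw)
```

Swapping a vendor then means adding one normaliser server-side; the client keeps consuming the same canonical fields.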

All that said, high-end orchestration engines also provide decisioning capability, or the ability to map the data captured and computed into an existing decision engine alongside a client’s incumbent data and scoring model. Furthermore, there is a role for both to co-exist: we created the Sphonic stack to be consumable by decisioning platforms, so that new and additional contextualised data points can help a decision engine make smarter decisions. Inevitably this can help to reduce false positives, which are often a challenge with such tools, particularly for Machine Learning based engines that may lack the base level of data they ordinarily require to be effective.

Understanding Data

Another not insignificant item: typically, when using multiple APIs, data (including low-level data) needs to be “stitched together”.

But what about the hundreds of disparate data points and response codes available?

The key is to focus on non-overlapping data. After all, you cannot simply find an individual on a single data source from one supplier and then count the same underlying source again from an alternative supplier.

Any orchestration engine worth its salt should enable the “stitching” of non-overlapping data points to complete a full KYC picture, whilst also re-routing within the same synchronous call to additional service APIs to capture missing data and augment further from there. This becomes increasingly important when taking on the recommendations from JMLSG, where the old language of 2+2 in respect of Electronic Verification no longer exists. A single data-source match does not mean the identity process is weak: when augmented with other data items, a score of the veracity of the data source and quality anti-impersonation processes, it allows a firm to be compliant while maintaining a decent level of customer experience along the way.
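
Here is a minimal sketch of such stitching, with illustrative source names: each underlying source is counted once, however many suppliers resell it, and the per-check results are merged into a single picture.

```python
def stitch(results: list[dict]) -> dict:
    """Merge per-source results into one KYC picture, keeping the first
    answer per check and counting each underlying source only once."""
    picture: dict = {}
    seen_sources: set[str] = set()
    for r in results:
        if r["underlying_source"] in seen_sources:
            continue  # same source resold by another supplier: no extra weight
        seen_sources.add(r["underlying_source"])
        for check, value in r["checks"].items():
            picture.setdefault(check, value)
    return picture

full_picture = stitch([
    {"underlying_source": "credit_bureau_x",
     "checks": {"name_match": True, "address_match": True}},
    {"underlying_source": "electoral_roll",
     "checks": {"address_match": True, "dob_match": True}},
])
print(full_picture)  # {'name_match': True, 'address_match': True, 'dob_match': True}
```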

Using Data

Besides real-time data and results, orchestration engines should also be used to initiate further actions, be they real-time “calls to action” (e.g. “document validation is now required” if your Electronic Verification result was unsuccessful or produced a weak outcome) or the routing of data for analysis, reporting, back-office case prioritisation or case investigations. Each data point captured within the initial orchestration exercise can be used for key additional process requirements as part of a wider AML process, strategy and policy delivery.
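
As a sketch, with hypothetical score thresholds and queue names, this is how an orchestration result might be turned into a follow-on action:

```python
def next_action(ev_score: int, sanctions_hit: bool) -> dict:
    """Map an orchestration outcome to a follow-on action (illustrative)."""
    if sanctions_hit:
        return {"action": "route_case", "queue": "investigations"}
    if ev_score < 40:   # electronic verification failed outright
        return {"action": "decline"}
    if ev_score < 70:   # weak outcome: step up in real time
        return {"action": "call_to_action",
                "message": "document validation is now required"}
    return {"action": "accept"}
```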

Managing Asynchronous Processes

In some countries, regulation requires the capture and verification of documents. In others, documentation may be needed where a sufficient level of electronic identity cannot be attained, or sufficient comfort provided. Additionally, clients may decide to invoke such checks where a high level of risk is present or is identified in real time in the initial orchestration workflow. Orchestration engines must therefore also be able to initiate additional asynchronous processes and join up the subsequent results to build a complete and regulatory-compliant picture.
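
A minimal sketch of that join-up, assuming an in-memory store and a webhook-style callback (both illustrative): the synchronous results are held against the case, and the asynchronous document result is merged back in when it arrives.

```python
PENDING: dict[str, dict] = {}  # case_id -> partial KYC picture

def start_document_check(case_id: str, partial_picture: dict) -> None:
    """Hold the synchronous results while the document check runs."""
    PENDING[case_id] = partial_picture
    # ...kick off the document-verification vendor here...

def on_document_result(case_id: str, doc_result: dict) -> dict:
    """Webhook-style callback: join the async result to the held picture."""
    picture = PENDING.pop(case_id)
    picture["document_verified"] = doc_result["verified"]
    return picture  # complete, regulator-ready picture
```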


Sphonic pioneered orchestration in the Digital Risk Management space back in 2012, and now, into our 9th year, the system has evolved further based on the learnings from our ongoing client work, innovation in data and advances in technology. If you would like to learn more, do get in touch: visit www.sphonic.com or drop us a line at info@sphonic.com.