
What RegTech means to the asset management industry today – we talk to two experts

Chris Hamblin, Editor, London, 31 May 2018


Todd Moyer, a 20-year veteran of the financial technology scene, and Katie Kiss, a product manager at Confluence with an asset management background who focuses on regulatory themes, explained their vision for the interplay between regulation and information technology to Compliance Matters recently.

The pair started by mentioning the expansion of regulation in the US that has taken hold this year, adding that they were experiencing it ‘in real time’ and that it was bound to have a knock-on effect on regulation elsewhere in the world. They were also keen on RegTech, the explosion of regulatory IT that is transforming regulation at the moment. Moyer commented: “I think that term was introduced by the Financial Conduct Authority back in 2015. I was at the Global Fund Forum in Germany and that was the first time I had heard the term.”

What’s it all about, ALFI?

Both had just been to Luxembourg to attend the ALFI (Association of the Luxembourg Fund Industry) conference. Moyer said: “It's interesting to me that when I look at the life cycle of regulatory change, and really just change in general in our asset management market, what topped the agenda was still regulatory pressures, despite the constant shifts we were seeing and the speed of change. We've seen what that looks like, and there's also been quite a bit of conversation about the uncertainty around Brexit and what that means.”

Katie Kiss said: “At the conference I was shocked to hear about the number of new regulations we have seen; someone was actually talking about an iceberg effect. If we look at European regulation since 2010, we have had 49 new directives and, more importantly, the level 2 and level 3 measures underneath those directives, which are in the region of 800. What that means to the industry and to compliance teams is obviously the sheer volume of information they need to consume, the complexities that those regulations are bringing, and the question of how to interpret them.

“They were also mentioning MiFID [the European Union’s second Markets in Financial Instruments Directive] in Europe, saying that there have been more than 20,000 pages of documents that someone in those organisations had to read. And obviously that has a knock-on effect on us as well. As part of our job and as part of my job as a product manager we do look very closely at the regulations and what they mean and how we interpret them. That was one of the main things discussed at the conference but [it] also [looked at] how everything is shifting, so we've seen the AIFMD [the Alternative Investment Fund Managers Directive] taking effect in 2014.”

Revisiting and recalibrating

Referring to Undertakings for Collective Investment in Transferable Securities or UCITS, she went on: “Typically, there is a cycle in which these regulations are revisited, and in Europe it's not necessarily new regulations that are coming out; it's more a matter of revisiting, learning and reacting to the lessons learnt. So AIFMD was 2014 and we're actually seeing that the European Commission has now commissioned KPMG to revisit it and see whether [the] AIFMD had the right impact, and we're expecting something to come out in 2018. [Judging from] two comments that were made during the conference, we will indeed be seeing changes, probably in the UCITS world, similar to what we saw with the AIFMD.

“I'm thinking that if the AIFMD came in 2014 and they're revisiting it now, probably a UCITS V revamp will not be far away. Similarly, with MiFID, it 'went live' at the beginning of January. Already we've seen a lot of noise [sic] about transaction costs and the implications of how asset managers ought to calculate that cost.”

Miss Kiss went on to suggest that the costs had been grievous and said that she had heard talk of the EU coming out with a MiFID III. Another example of updates that she noted was the fifth Money Laundering Directive (not yet passed, but being debated vigorously), which is about to come hard on the heels of the fourth in an almost seamless transition.

Lessons to be learnt

When asked whether this European routine differed from the US regulatory environment, Todd Moyer, who thought that the financial crisis that came to light in 2008 was now over, said: “Post-crisis, we're seeing the focus on systemic risk reporting. When we really think about what's happened in the US, you look at money market fund reform happening in 2010, and now you're seeing money market reform hitting in 2019 in the EU. When you think about what's transitioned there (we're talking in the US about several hundred money market funds), the impact was significant but not overwhelming back in 2010; as you look to Europe, there is reform affecting several hundred money market funds that is going to take hold in '19.

“What we've seen is a massive transition in the US over the last 12 to 24 months as it relates to the 1940 Act mutual fund space. There are over 12,000 mutual funds affected by what's called SEC modernisation, also termed N-PORT/N-CEN, which are the actual filings that are required [by] the regulators. Really, what we've found is the absolute disruption that that has caused in our market and in the US market. We are in an interesting position to assess and understand how the data's being consumed and how they're managing their data. The biggest lesson [we've] learnt is to start early.”

N-PORT and N-CEN

[Editor’s note: The US Securities and Exchange Commission (SEC) has developed N-PORT (Portfolio) and N-CEN (Census) as new forms on which public 1940 Act mutual funds and exchange-traded funds (ETFs) – named after the Investment Company Act of 1940 under which the SEC regulates them – must send it reports. Their predecessor for private funds is Form PF. As with their forerunner, they aim to allow regulators and the investing public to know more – in this case, by capturing data using an XML-based disclosure schema and validation that lends itself to comparison between funds, and can be used for predictive analysis by the regulator’s examinations unit. N-PORT is designed to do this at a portfolio level every month, keeping track of the debt involved in derivatives’ and convertible bonds’ usage and exposure, shadow banking and other activities that incur liquidity risk. The SEC has proposed to ‘bucket’ these exposures and make them public so that shareholders can see them.]
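As a rough illustration of what an XML-based disclosure schema with validation buys the regulator, here is a toy sketch in Python. The element names (`portfolioReport`, `fundName`, `valueUSD`) and the `build_filing`/`validate` helpers are invented for the example; they are not the real N-PORT tags or tooling.

```python
# Toy sketch (not the real N-PORT schema): build a minimal XML disclosure
# fragment and run a simple structural validation over it, in the spirit of
# the SEC's machine-readable filing formats. All names are illustrative.
import xml.etree.ElementTree as ET

def build_filing(fund_name, holdings):
    root = ET.Element("portfolioReport")          # hypothetical root element
    ET.SubElement(root, "fundName").text = fund_name
    hs = ET.SubElement(root, "holdings")
    for issuer, value in holdings:
        h = ET.SubElement(hs, "holding")
        ET.SubElement(h, "issuer").text = issuer
        ET.SubElement(h, "valueUSD").text = str(value)
    return root

def validate(root):
    """Check that required elements are present and values parse as numbers."""
    errors = []
    if root.findtext("fundName") in (None, ""):
        errors.append("missing fundName")
    for h in root.iter("holding"):
        try:
            float(h.findtext("valueUSD", ""))
        except ValueError:
            errors.append("non-numeric valueUSD for %s" % h.findtext("issuer"))
    return errors

report = build_filing("Example UCITS Fund", [("Issuer A", 1000000.0),
                                             ("Issuer B", 250000.5)])
print(validate(report))  # → []
```

Because every filer emits the same tagged structure, a regulator can run the same validation and comparison logic across thousands of funds, which is the point the editor's note makes about predictive analysis.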

Get your data in order and start early!

Moyer continued: “If you believe that this is ultimately going to hit the UCITS space, there are more than 30,000 UCITS registered currently. The impact will be something like 3X what it was in the US. I think there's a real translation, if you follow history, as it relates to how these things pan out. So we really have seen a tremendous shift towards data because of that. And so I think that's the biggest lesson learnt right now. Get your data house in order and start early...if you believe that this is a transition that is coming to the UCITS space.”

When asked whether the data-related challenges that the SEC’s modernisation programme posed were a question of sheer volume or whether other challenges came into play, Moyer opted for the latter: “It's the number of data sources as much as it is the sheer volume. Just as an example, creating an annual financial statement requires maybe 60 to 80 data sources. When you begin to think about systemic risk, you're talking about 30+ data sources. So you have technological challenges where you have nowhere to put this data.

“You're bringing in third-party data [i.e. data that comes neither from the financial institution nor from the customer in question] that might be in unstructured formats such as spreadsheets or other files. As an organisation, or as an asset manager or service provider, how do you handle that data? What do you do with it? How do you make sense of it? How do you put it into a data model that enables you to use that data across regulatory challenges? So I think there's been a real sea change in what it means to get your data in order. You have to ask what that really means to each asset manager, what it means to service providers, and how they are actually using data as an asset.”
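Moyer's point about putting inconsistent third-party feeds into one common data model can be sketched in a few lines of Python. Everything below is invented for illustration (the alias table, the field names and the `normalise` helper are not Confluence's actual design):

```python
# Illustrative sketch: third-party records arrive with vendor-specific column
# headings; an alias map folds them into one canonical schema so the same
# data can feed multiple regulatory reports. All names here are hypothetical.
ALIASES = {
    "isin":         {"isin", "isin_code"},
    "market_value": {"mv", "market value", "marketvalue", "value"},
    "currency":     {"ccy", "curr", "currency"},
}

def normalise(record):
    """Map a raw dict with vendor-specific keys onto the canonical schema."""
    out = {}
    for canonical, variants in ALIASES.items():
        for key, val in record.items():
            if key.strip().lower() in variants:
                out[canonical] = val
    missing = set(ALIASES) - set(out)
    if missing:                     # surface unmapped fields instead of guessing
        raise ValueError(f"unmapped fields: {sorted(missing)}")
    return out

raw = {"ISIN": "LU0000000001", "MV": "1250000", "Ccy": "EUR"}
print(normalise(raw))
# → {'isin': 'LU0000000001', 'market_value': '1250000', 'currency': 'EUR'}
```

Once records share one schema, the "use it across regulatory challenges" step becomes ordinary reporting code rather than per-source manual work.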

Another interesting phenomenon for financial institutions that have had to deal with SEC modernisation has been the speed at which they have had to work. Moyer said that the days of one-off manual strategies that used Microsoft Excel were over.

You have to trust the technology...

“We've passed that tipping point, in my opinion, where you really have to trust the data and leverage it as an asset because you don't have time to manually tick and tie [i.e. to make sure that every item on a ledger or in an inventory is accounted for and properly connected to other items to which it is related, but the expression can also denote the simple act of double-checking things, according to cheesycorporatelingo.com] the way you previously had. In the US, the requirement is every 30 days. It's just not possible for humans to go through and manage such a massive amount of data in such a short period of time. So you really are beginning to trust the system. You have to trust the technology.”

...but can you trust the FCA to prescribe outputs?

Near the end of February the British FCA asked the regulated public to talk to it about the part that IT can play in regulatory reporting in a characteristically garbled message that mentioned "launching a call for input." Its policy on the subject has been covered elsewhere in Compliance Matters. Todd Moyer and Katie Kiss seized on this as a good example of regulators' desire to press on with the machine-readable, machine-executable translation of regulations.

Moyer was enthusiastic: “We really look at the world moving towards digitalisation. That's going to have to leverage some of the AI and some of the machine learning items, but really it's a move away from output-focused reports towards data-focused reports. Getting that into a machine-readable format is going to be where we see everything going, where today there is still an 'output' view of the world where you focus on the ‘output’ you need to get out there to an investor, to a regulator, as opposed to [concentrating on] getting the data right.

“We've seen the move towards standardisation...on the output side. So where you traditionally felt that a document produced for investors was a bit of a marketing document or more of a specific output, we've seen a real move towards coming up with a standard. Really, the only way with automation is to create some standardisation, both on the input side and on the output side.

“I think the regulators do that very well. They prescribe an output.”

Machine readable, machine executable

Katie Kiss expanded on the FCA’s "machine readable, machine executable" policy. In her eyes, the machine-readable idea was to compel each asset management firm to make a machine read a regulation and interpret it. The machine-executable idea was to ask the firm to make the machine execute the order that it needed to submit to the regulators. She thought that a standardised data format would make it much easier for the firm to put this into practice.

She added: “I think, in my view, what happened a couple of years ago, when these new regulations were coming out and firms were really challenged in terms of implementing them, is that everyone went and put the technology in place as a point solution [i.e. a way of solving one particular problem without regard to related issues]. No-one actually talked about data in a strategic way. I think we've seen that shift now, obviously with the FCA coming out, and there are a lot of tech labs happening in every country, with the regulators setting up more and more round tables and discussing how the regulator can also help.

“In my view...probably the future will be...if you sort out your data strategy, the regulator will be able to grab the data. You won't have to push the data to the regulator, the regulator will be able to come and take the data they need for those specific regulatory requirements and that's where I think Confluence will have a massive role to play because of our whole DataTech concept and how we're designing that.”

Making sense of unstructured data

Returning to the theme of standardisation, Moyer was optimistic: “That's really where groups like ours come in with technology to help handle and structure data. We as a firm are focusing on bringing unstructured data in, making sense of it and really normalising it. I think technology is going to play a huge role in that data standardisation [and] data normalisation, and that's going to feed into the data strategy of the asset management space. So each asset manager is going to have significant needs for their data strategy beyond the back-end reporting piece, which is what we're talking about here today. It's going to be much more around how do I leverage data as an asset and how do I leverage data throughout my whole asset management space, not just specific to how do I manage it for...reporting. That's an important thing, but it's a component of your overall strategy.

“I think that when we're talking about some of the marketing documents and some of the investor communication documents you're still going to see some uniqueness to that. I think it's a journey towards getting to a digital output. Digitalisation doesn't happen overnight. There's still a real demand to see some printed documents but we see that going away over time.”

The Semantic Web and Techsprint

People are working on the use of the Semantic Web – an extension of the World Wide Web through standards evolved by the World Wide Web Consortium (W3C) – to solve these problems as well. The standards claim to promote 'common' (presumably this means "used by more than one person" rather than "vulgar") data formats and exchange protocols on the Web, most fundamentally the Resource Description Framework (RDF). According to the W3C, "the Semantic Web...allows data to be shared and reused across application, enterprise, and community boundaries". It is therefore regarded as an "integrator across different content, information applications and systems."
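RDF's core idea can be shown without the full W3C stack: every fact is a subject-predicate-object triple, and any pattern over those triples can be queried the same way regardless of which source contributed the data. The sketch below is a toy; the identifiers are invented for the example, whereas real RDF uses URIs and tools such as SPARQL for querying.

```python
# Toy illustration of the RDF data model: facts as subject-predicate-object
# triples, with a wildcard pattern-match standing in for a real query engine.
# The fund names and predicates are invented for the example.
triples = {
    ("FundA", "isA", "UCITS"),
    ("FundA", "domiciledIn", "Luxembourg"),
    ("FundB", "isA", "AIF"),
    ("FundB", "domiciledIn", "Ireland"),
}

def query(s=None, p=None, o=None):
    """Return triples matching the given pattern; None acts as a wildcard."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

print(query(p="domiciledIn"))
# → [('FundA', 'domiciledIn', 'Luxembourg'), ('FundB', 'domiciledIn', 'Ireland')]
```

Because triples from different applications merge into one graph with no schema negotiation, this is the sense in which the W3C calls the Semantic Web an integrator across content, applications and systems.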

In the context of the Semantic Web people often speak about ontologies as ways of defining and classifying things in relation to each other. There is actually no such thing as a 'semantic ontology,' but this is the term they use. The FCA’s ‘Techsprint’ meetings include some bodies that only handle semantic ontology, along with a kaleidoscope of other ‘RegTech’ experts. Confluence is now a member of the FCA’s ‘round table’ on the subject.

Compliance Matters asked Katie Kiss where she thought that the FCA consultation process was going to end up. She did not claim to have a crystal ball: “Well they're asking the industry for something special in terms of whether it's a risk or not a risk. It can go either way. I think it will be interesting to see what the input from the different asset managers and the technology providers is going to be, but I think that the main objective with that is to assess having a machine-readable regulation and to ask ‘is that risky?’ Are we putting [i.e. taking on] too much risk expecting the machine to interpret that regulation?” She declined to venture an opinion about whether such activity was indeed too risky.

“I don't have necessarily a very strong opinion. My strong opinion is on that second bit which is the machine executable. I think that's definitely where we need to be. Five years down the line, I would love the regulator to be able to come and have the data they need for the regulation as opposed to firms having to take on the burden of pushing documents towards the regulators. The end objective for the regulator is really to be in that position where they can go and grab the data."

Third-party data

Compliance Matters asked Moyer whether the term ‘third-party data’ tended to cover data from the public domain, lifted from open sources. He agreed: “For the most part, but not always. A lot of it is maintained in third-party collateral information. Is it public? Yes, it's public, but it is hard and difficult to source and...that's the bigger challenge, I think. It's not that the data's not available, it's how do you actually pull the data in and make sense of it. That's where we see the technology making sense. Where we're spending a lot of time on technology is having it learn from that data when it actually comes in and rationalise it, so the next time you're not seeing the same challenges over and over. There's going to be a lot of that as well.”

What is DataTech?

Confluence has coined a new term – DataTech. Todd Moyer attempted to explain: “Similar to SupTech, you may not have heard of DataTech. For us it's an evolution, not a revolution. It's an evolution starting with post-crisis change and other changes evolving into RegTech, and RegTech evolving into DataTech. It's really focused on data-driven processing as opposed to output-driven processing. The industry has always focused on what's the output. This is a move and a shift towards data-driven processing.”

In computer programming, data-driven programming is a programming paradigm in which the program statements describe the data to be matched and the processing required rather than defining a sequence of steps to be taken. Moyer went on with his explanation.

“This is as much an operational change as it is a technology change. It's for these organisations to take a view towards data first, with some of the things we've been talking about around data re-use, and really not to focus just on how do I get this report or this filing out, but on how do I use data as the core vehicle to enable their overall process to change, their operations to change. So it's: trust the data and leverage it for multiple outputs. That's really what we're talking about when we talk about DataTech.

“RegTech solved individual solutions, point solutions; it solved for the outputs, and that was very necessary during that phase of the evolution. But as we're going into, you know, understanding how I can do things more efficiently and how I can actually leverage my data, we're really focusing on the data first, enabling the data to be the focus, not the output.”
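The data-driven idea described above can be illustrated with a toy rules table: the checks to run are themselves data, and one generic engine applies them, rather than each check being hard-coded as a separate step. The rule names and thresholds below are invented for the example.

```python
# Minimal sketch of data-driven processing: the "program" lives in a table of
# rules (field, predicate, error message) that a generic engine walks over.
# Adding a check means adding a row, not writing new procedural code.
RULES = [
    ("leverage",  lambda v: v <= 2.0, "leverage above 2x limit"),
    ("liquidity", lambda v: v >= 0.1, "liquid assets below 10%"),
]

def run_checks(fund):
    """Apply every rule in the table to one fund record; return failures."""
    return [msg for field, ok, msg in RULES if not ok(fund[field])]

fund = {"leverage": 2.5, "liquidity": 0.25}
print(run_checks(fund))  # → ['leverage above 2x limit']
```

The same record can then feed many outputs (filings, dashboards, client reports) from one trusted data set, which is the "leverage it for multiple outputs" point Moyer makes.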

When asked to give an example of how a strategic approach to using internal data might make operations more efficient, he was careful not to name any names: “I recently worked with a large global service provider and they came to us to meet the SEC modernisation requirements. They were looking across their data, saying: I'm trying to source data for my global regulatory needs and I have a requirement of 70 different forms that I report on, on behalf of an asset manager, globally. They looked at some of the banking regulations [to do with] reporting, [including those of] the Central Bank of Ireland, the Luxembourg CSSF and the Banque centrale du Luxembourg, as well as...SEC modernisation.

“They said although I'm looking to...solve this, I realise that there's a cross-pollination of data across the platform. How do I use that data once, bring it into my application and just confirm it one time? And then allow it to cross-pollinate across my various reports? They were looking to use data to meet their clients’ needs more efficiently and to be able to offer reporting globally where a lot of times they weren't able to because of the challenges of sourcing data, the challenges of consistency across the board.

“They were telling the asset managers to handle regulation on their own. The asset management community has come back and said to a lot of these service providers: we want you to do all of it. In that case, they were able to meet the needs of the 50+ reports on a single platform and that's really the evolution instead of just looking towards a single report and solving for that. They're able to then solve across the board for themselves.

“I don't think it's going to be a long time until you're really 'hands off' as it relates to this reporting, so the data should flow in and the data should flow out and technology can help support that transaction.”

When asked whether his firm’s motto was “take care of the data and the outputs will take care of themselves,” he readily agreed.
