26 June 2018

Transforming the impact of financial information: the role of technology

Are traditional metrics still relevant?

On 5 June 2018, CFA Institute and the IFRS Foundation hosted a panel debate at the Guildhall in London, asking whether traditional metrics are still relevant amidst rapid technological advancements. Moderator Jane Fuller captures some of the key points made during the discussion.

If investing is as much an art as a science, any debate about the impact of technology is bound to prompt as much fear as hope. The hope is that technology will help collect, sift and analyse the billions of bits of information that exist on companies and other assets, and on their customers, economic activity, trade flows, the weather—anything that influences stock, bond and derivative prices. The fear is simply that automation and machine learning, via artificial intelligence (AI), will replace us.

The Guildhall debate provided stimulus for both types of reaction.

Hopes

Quantitative analysis has been around for decades, enabling a statistical, model-driven approach to investing that strips out human biases. Rapid advances in data collection and analytics, including AI-driven back-testing against price movements, have ushered in a new era of systematic investing.
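
As a minimal sketch of what such back-testing involves, the Python snippet below scores a simple moving-average crossover rule against a synthetic price series. The rule, the window lengths and the data are illustrative assumptions, not anything discussed on the panel.

    # Back-test a moving-average crossover rule on a synthetic price series.
    # Real back-tests would use market data; everything here is illustrative.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, 500))))

    fast = prices.rolling(10).mean()
    slow = prices.rolling(50).mean()
    position = (fast > slow).astype(int).shift(1)  # trade on yesterday's signal

    strategy_returns = prices.pct_change() * position
    print("Cumulative strategy return:", (1 + strategy_returns.fillna(0)).prod() - 1)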

Data can now be collected on customers' behaviour by tracking their mobile phones, on trade via sensors in shipping containers, and on the movement of goods in and out of warehouses. The latest analytics can then search for past price correlations and use these to predict future movements. Such an edge in active investing can be eroded quickly, but practitioners can just as quickly move on to the next set of unstructured (and therefore previously underused) data.
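
A rough illustration of that correlation-hunting, in Python: a hypothetical daily foot-traffic signal is lagged by one day and correlated with the next day's stock return. The data is randomly generated, so the point is the mechanics, not the result.

    # Test an alternative-data signal against next-day returns.
    # "foot_traffic" stands in for any unstructured data source; all
    # numbers are synthetic.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    dates = pd.bdate_range("2018-01-02", periods=250)

    foot_traffic = pd.Series(rng.normal(1000, 50, len(dates)), index=dates)
    prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, len(dates)))),
                       index=dates)

    returns = prices.pct_change()

    # Correlate today's signal with tomorrow's return by lagging the signal.
    lagged_signal = foot_traffic.shift(1)
    ic = lagged_signal.corr(returns)  # the "information coefficient"
    print(f"Lag-1 correlation between signal and next-day return: {ic:.3f}")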

For a systematic investment operation, this relegates traditional financial information to a smaller part of the relevant data pool. Its continuing value lies in its structure, notably standardised definitions, in its comparative reliability and in the long run of evidence on links to price movements. New sources of data are tested against the traditional variety.

Those new sources include monitoring the tone of voice of executives at analyst meetings (CFOs are apparently more reliable than CEOs), and could move on to detecting emotion via tiny facial changes in video clips.

For auditors, testing of management's figures will be able to take in the whole data set rather than samples. The application of AI to the analysis of all this means that the anomalies and outliers thrown up by the process are more likely to be genuinely worthy of investigation.
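
A minimal sketch of what full-population testing might look like: every journal entry in a ledger is scored with a robust z-score (median and median absolute deviation), so that only genuine outliers surface for investigation. The field names and figures are hypothetical.

    # Score every journal entry, not a sample, and flag outliers with a
    # robust z-score. The ledger below is a hypothetical example.
    import pandas as pd

    ledger = pd.DataFrame({
        "entry_id": range(1, 9),
        "amount": [120.0, 95.0, 110.0, 105.0, 98.0, 9_950.0, 102.0, 115.0],
    })

    median = ledger["amount"].median()
    mad = (ledger["amount"] - median).abs().median()

    # 1.4826 scales the MAD to be comparable with a standard deviation.
    ledger["robust_z"] = (ledger["amount"] - median) / (1.4826 * mad)

    # Only entries far from the bulk of the population are flagged.
    anomalies = ledger[ledger["robust_z"].abs() > 3.5]
    print(anomalies)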

The prospect of machines doing the grunt work while humans investigate the results is an appealing one. But a gap remains between theory and reality, and the issue of data quality and integrity persists—how clean are the inputs?

Preparers of accounts are focused on automation, particularly in drawing together data from different sources. Here the elimination of humans helps, as they are more prone to errors (and to manipulating the data) than robots are. The acceleration of the process matters too: it suggests that the time taken to report financial information to investors can be shortened.

As for trading, that is already highly automated. Humans cannot compete with the speed of algorithmic reactions. This means that much buying and selling is already dictated by technology, which is cheap and efficient. But it might make flash crashes more likely, either because of a systems failure or because of similar program designs, creating automated crowd reactions.

Fears

Technology-driven price crashes are one of the dangers that exchanges and regulators will continue to grapple with, although humans may welcome the chance to take advantage of these dislocations (if they can get rapid enough access to the distorted prices).

Will the importance of data analysis and software programming make the professions of financial analysis and accounting more or less attractive? There is a concern that, as with the statistical emphasis in economics, people interested in the human aspects of the subject will be put off. ICAEW integrates training on technology into its audit courses, but understanding business is still more important than understanding the details of programming.

Where does this leave fundamental analysis? Standardised information has always been the analyst's friend, enabling comparisons over time and between asset classes and their constituents. Important questions include whether more data means better data, and whether human biases are baked into the algorithms sifting it.

Standardised definitions remain important and the International Accounting Standards Board's project on the primary financial statements will produce more sub-totals, possibly including operating income and earnings before interest and tax.

Given clean inputs of well-defined data, the outlook for robo-investing—from smart stock screens to dynamic rebalancing of portfolios—is tantalising, as illustrated by the advance of automation in exchange-traded funds.
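
As an illustrative sketch of rule-based rebalancing (tickers, targets and tolerances are all hypothetical): if any holding drifts beyond a set tolerance from its target weight, the portfolio is traded back to target.

    # Threshold rebalancing: trade back to target weights when drift
    # exceeds a tolerance. All names and figures are hypothetical.
    targets = {"EQUITY_ETF": 0.60, "BOND_ETF": 0.40}
    holdings = {"EQUITY_ETF": 70_000.0, "BOND_ETF": 30_000.0}  # market values
    tolerance = 0.05  # rebalance when a weight drifts 5 points from target

    total = sum(holdings.values())
    weights = {k: v / total for k, v in holdings.items()}

    if any(abs(weights[k] - targets[k]) > tolerance for k in targets):
        trades = {k: targets[k] * total - holdings[k] for k in targets}
        print("Rebalancing trades (buy +, sell -):", trades)
    else:
        print("Within tolerance; no trades.")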

But with so much of the technological approach geared to price movements and to gaining a relative edge, the fundamental question is: who is setting prices with an eye on fundamental value? All the developments of recent years have done nothing to allay fears of asset price bubbles.

So the role of humans is crucial, at least in exercising scepticism about the gee-whizz claims made for technological advances. How will the data scientists incorporate that into their algorithms?
