
What is an algorithm? Financial regulation in the HFT era

Published in Automated Trader Magazine Issue 41 Q4 2016

Regulators are increasingly concerned with automated trading and its potential risks. Can they learn something from the algorithm tagging rule in Germany's High Frequency Trading Act?

Dr. Nathan Coombs

AUTHOR'S BIO

Dr. Nathan Coombs is a Leverhulme Early Career Research Fellow in the School of Social and Political Science, University of Edinburgh, UK. His work is situated in a field known as the social studies of finance and focuses on the regulatory technologies introduced since the financial crisis.

For readers of Automated Trader there will be nothing mysterious about algorithms. Short strings of code, a secret sauce for trading; that's all. But in recent years social scientists have taken an intense interest in them. In 2015, the legal theorist Frank Pasquale published a book called 'The Black Box Society'. It focuses on the way our lives are increasingly governed by opaque automated processes, from Internet search engines to the algorithms that score our 'digital reputation'.

Within the academic community this has been called 'algorithmic governance'. Developing the work of the German sociologist Max Weber, who claimed that modern bureaucracies have locked us in an invisible 'iron cage', scholars see algorithms as taking society in troubling new directions. Algorithms watch, record and judge every aspect of our lives; and yet, most of us know little or nothing about them.

This is why a rule in the German High Frequency Trading Act ('HFT Act') requiring firms to tag their algorithms with a numerical code caught my interest. Here the notion of algorithmic governance was turned on its head: the regulation does not concern surveillance of humans by algorithms, but surveillance of algorithms by humans. The idea behind the rule was that it would allow trade surveillance officers to see algorithms at work in exchanges' order books. As a result, it was believed that the flash crashes and market manipulations that have proven difficult for regulators to make sense of could be more easily investigated; light would be shed on automated markets.

From my perspective, more interesting were the philosophical questions the rule raised. In particular, the HFT Act's requirement that firms identify the algorithm responsible for an order implies that we have a clear idea about what an algorithm is. But do we? I searched high and low, scouring the philosophical and computer science literature, but did not discover a satisfactory answer. My initial response to the tagging rule was therefore incredulity. It seemed like just another instance of regulatory naivety, destined to fall short of expectations.

My scepticism deepened when in 2014 I began investigating how compliance teams were responding to the rule. Since the rule came into force, some trading firms had issued only a handful of tags to represent all their algorithms, whereas others were generating thousands of tags each year. This raised worrying questions about the rule's purpose and efficacy: how could the tagging data be of any use if firms were taking such different approaches? Surely the data submitted to regulators needs to be rigorously standardised for it to help them monitor the market?

This was where I started. By the end of my investigation, I began to see how the rule might still be useful. This article aims to show why. It explains how regulators define an algorithm, why the rule has been interpreted so differently and the effects it is having on enforcement practices and the cultures of trading firms.

What is the rule's point?

When speaking with the proprietary trading industry, I encountered confusion about the purpose of the algorithm tagging rule. Some traders claimed not to see its point; others saw it as driven by irrational fears about high frequency trading promoted by the popular press. Like regulatory requirements for enhanced derivatives reporting and precise time-stamping of trades, the algorithm tagging rule was talked about as yet another example of regulators' increasing hunger for data without a clear idea of what they want it for or what they intend to do with it.

The way that the HFT Act seemed to have been driven by the failure of an earlier attempt to introduce a financial transactions tax in Germany only fuelled this scepticism.

However, German trade surveillance departments explain the rule's motivation in terms of the problems of interpreting market data nowadays. Given the sheer quantity of data, trade surveillance officers do not watch exchanges' order books manually when looking for problems or suspicious patterns. Instead, they make use of software based on pre-programmed alerts for anomalous patterns (the Swedish company Cinnober is currently one of the market leaders in this area).

Figure 01: Diagram from the algorithm tagging implementing guidelines

When a set of trades triggers an alert, surveillance officers then 'zoom in' on the market data to decide whether the trades warrant further investigation. That is what has become more difficult with the widespread use of trading algorithms. The problem stems from the fact that although traders can operate multiple algorithms in parallel, surveillance officers can only see individual trader IDs. So, for example, what might seem to trade surveillance like a 'layering' strategy may in fact just be a trader's algorithms pursuing parallel strategies and leaving behind a suspicious-looking footprint. Vice versa, real attempts at market manipulation might be hidden behind the anonymity of trader IDs.
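
To make the problem concrete, here is a minimal sketch in Python of why trader-ID-only surveillance struggles with parallel algorithms. It is not a description of Cinnober's software or any exchange's actual alert logic; the field names, the crude cancellation-count alert and the threshold are all hypothetical illustrations.

```python
# A toy order-book event and a crude 'many cancellations' alert, standing in
# for a real layering detector. Grouping by trader ID alone can lump several
# parallel algorithms into one suspicious footprint; grouping by trader ID
# plus algorithm tag shows which decision path the cancellations came from.
from collections import defaultdict
from dataclasses import dataclass
from typing import Optional

@dataclass
class OrderEvent:
    trader_id: str
    algo_tag: Optional[str]  # None before the tagging rule applied
    side: str                # "buy" or "sell"
    action: str              # "new" or "cancel"

def cancellation_alerts(events, group_by_tag=False, cancel_threshold=5):
    """Flag groups whose cancellation count reaches the threshold."""
    groups = defaultdict(list)
    for event in events:
        key = (event.trader_id, event.algo_tag) if group_by_tag else event.trader_id
        groups[key].append(event)
    return [key for key, evts in groups.items()
            if sum(1 for e in evts if e.action == "cancel") >= cancel_threshold]
```

With group_by_tag=True, cancellations that would otherwise merge into one trader-level footprint are split across the decision paths that generated them, which is the extra granularity the tagging rule is meant to provide.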

Another reason for algorithm tagging was to avoid algorithm registration requirements or code disclosure. In Europe, the idea of algorithm registration was suggested in early drafts of MiFID 2 and the German HFT Act. More recently, in the United States, the CFTC's proposed Regulation Automated Trading (RegAT) suggests that trading firms should lodge their algorithms' code at an external data repository (see page 19 of this issue for the latest developments surrounding RegAT). These proposals have proven controversial since they potentially violate the intellectual property rights of trading firms. Moreover, they are of questionable value. Since trading firms change their algorithms on a regular basis, how would regulators be able to monitor all the code? And would those traders attempting manipulative activities disclose their real code to regulators anyway?

A final reason for algorithm tagging was discovered when regulators began consulting market participants about the rule. Regulators were happy to discover that large high frequency trading firms were maintaining documentation about their algorithms, but this was not always the case for other financial institutions. A German regulator described this finding as a "bad surprise; we did not expect so many players in the market giving the impression that they do not really know what they are doing." Regulators were shocked to discover that "nobody in the firm knows sometimes how the algorithm works. What is a decision path? I don't know. I push this button. Full stop." In light of the 2012 Knight Capital incident - in which an untested algorithm containing an obsolete function almost bankrupted the company - some firms' ignorance about how their algorithms work was alarming.

The idea of algorithm tagging was therefore a compromise solution intended to shed light on automated markets while avoiding the need for algorithm registration and code disclosure. But the rule compelled regulators to venture into the unknown: to implement the idea they would need to define what a trading algorithm is.

What is an algorithm?

How did German regulators do so? They did not simply attempt to take a computer science definition off the shelf (as I discovered, the literature would not have provided much help). Instead, they began by consulting market participants and asking them what they think an algorithm is. Many trading firms had their own definitions that they used for keeping track of changes to their algorithms, which provided a good starting point. However, after a twelve-month consultation, regulators found themselves getting no closer to arriving at a single definition. As a German regulator put it to me:

"We came to the question: so, what is an algorithm really? When you discuss with market members you get a multitude of different options. So one says each IF/THEN clause is an algorithm. And the next one says my strategy is an algorithm. And so we have everything in between."

Regulators would therefore need to define what an algorithm is themselves. Given that trading firms use algorithms for everything from news sentiment analysis to discovering arbitrage opportunities, this presented philosophical and logistical challenges. How could all these algorithms be identified with a single number? In response to a discussion paper by ESMA, the Futures Industry Association got to the nub of the problem:

"Some trading algorithms are made up of a sum of software sub-parts, which are built to receive and relay data from other correlated markets, or perform other separate calculations to handle complex events… This makes it extremely difficult to identify whether the overarching algorithm is a unique 'whole' or has changed by means of its sub-parts."

Put differently, with firms making use of an assemblage of algorithms in their trading processes, how can the part be separated from the whole? Regulators attempted to resolve this problem with the idea of the 'decision path'. They decided that they were not concerned with trading firms' internal definitions of their algorithms, but only with the series of automated decisions. The rule's implementing guidelines focus only on the complete set of IF/THEN decisions responsible for an individual trading order. The exact wording in the guidelines is as follows:

"A trading algorithm is an EDP-based algorithm containing a well-defined sequence of instructions of finite length… [it] has to be identified [with] the entire sequence of calculation steps (decision path)… the identification obligation is referred to as a sequence of instructions and not to its individual elements, even if the latter could be considered separately as independent algorithms."

Note that EDP means simply 'Electronic Data Processing' and is a direct translation from the German 'Elektronische Datenverarbeitung' (EDV). A more sensible translation would be 'computer-based'.

The 'algorithm' firms are required to tag is thus a composite of their internal algorithms. Furthermore, the guidelines also state that any change in the sequencing of an order's parameters requires a new tag (this principle is demonstrated in Figure 01 by two different decision paths). For example, if the volume of an order is initially determined before the trading venue and this sequence is reversed, then the rules stipulate that a material change in the algorithm has taken place, requiring a new algorithm tag to be issued. A senior market surveillance officer explained the principle: "You may have a situation where you [a trading firm] have one arbitrage algorithm based on your internal definition, but you have to mark [assign separate algorithm tags to] 5,000 different paths."
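
To illustrate the decision-path principle, here is a minimal sketch, assuming a decision path can be written down as an ordered list of steps. The hash-based identifier is purely illustrative and is not the numeric tag format actually assigned under the HFT Act; it simply shows why reversing the order in which volume and venue are decided counts as a different 'algorithm' for tagging purposes.

```python
# Derive an illustrative identifier from the ordered sequence of automated
# decisions behind an order. Reordering two steps changes the identifier.
import hashlib

def decision_path_tag(steps):
    """Hash the ordered sequence of decision steps into a short identifier."""
    serialised = " -> ".join(steps)
    return hashlib.sha256(serialised.encode("utf-8")).hexdigest()[:12]

path_a = ["detect arbitrage signal", "decide volume", "decide venue", "submit order"]
path_b = ["detect arbitrage signal", "decide venue", "decide volume", "submit order"]

print(decision_path_tag(path_a))  # one tag
print(decision_path_tag(path_b))  # a different tag: the sequencing has changed
```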

The rule is an ingenious response to disagreements between firms about how to define an algorithm. But it does have one crucial indeterminacy: it does not specify what a material change in a parameter is. That is left to trading firms to determine themselves. German regulators left this open to the judgement of compliance teams because they were concerned that making the implementing guidelines too precise would lead back towards the sort of computer-code based rules the idea of algorithm tagging was intended to avoid. Just as importantly, bringing such a level of precision to the guidelines would not allow trading firms to tailor their compliance responses to their individual circumstances. Regulators wanted to take advantage of trading firms' existing skills and knowledge.

My research discovered that this indeterminacy is what has led to the diverse compliance responses of different trading firms. Since no consensus exists about what a material change in a parameter is, nor about the appropriate method for determining when such a change has taken place, creative solutions were required. Some firms' compliance teams responded with quantitative approaches.

For instance, one high frequency trading firm I spoke to had devised an automated system for determining tipping points at which a change in an algorithm should be classed as material. At the other end of the spectrum, some firms adopted a qualitative approach. For instance, in one firm compliance officers had created example sheets of the sort of changes which require authorisation, with the intention of building up traders' ability to make their own professional judgements about when a material change has taken place.
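
To give a flavour of what a quantitative approach might look like, below is a minimal sketch of an automated materiality check. The parameter names, the 10% threshold and the rule that adding or removing a parameter is always material are hypothetical; the actual 'tipping point' system the firm described was not disclosed to me in this detail.

```python
# A toy materiality check: a change to an algorithm's parameters is flagged as
# material (and so as needing a new tag) if any shared numeric parameter moves
# by more than a relative threshold, or if parameters are added or removed.
def is_material_change(old_params, new_params, relative_threshold=0.10):
    if set(old_params) != set(new_params):
        return True  # adding or dropping a parameter is treated as material
    for name, old_value in old_params.items():
        new_value = new_params[name]
        if old_value == 0:
            if new_value != 0:
                return True
        elif abs(new_value - old_value) / abs(old_value) > relative_threshold:
            return True
    return False

# Widening a quoting spread from 2 to 3 ticks is a 50% change, so it would be
# flagged as material under this illustrative threshold.
print(is_material_change({"spread_ticks": 2, "max_order_volume": 1000},
                         {"spread_ticks": 3, "max_order_volume": 1000}))  # True
```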

While these different approaches lived up to regulators' expectations of harnessing the existing knowledge in trading firms, they have, worryingly, also resulted in large variations in the number of tags being assigned by different firms: from the single digits to the thousands. Clearly, a firm which has generated only five tags since becoming 'compliant' is not really using such a limited array of strategies. Similarly, it also seems unlikely that the thousands of tags being generated by other firms map precisely onto how many strategies they are actually employing. These inconsistencies place a question mark over how useful the data can be.

Is the rule working?

A common assumption is that regulatory data needs to be highly accurate to be useful. Since regulators aim to monitor the market and determine right from wrong, then unless they can accurately see what is going on, surely they will not be able to do their job? At its extreme, this line of thinking implies that regulators need the sort of panoptical vision that the eighteenth-century English philosopher, Jeremy Bentham, aimed for with his prison design, in which the guards would be able to peer into every cell and watch inmates at all times. This was my assumption too when I began my research.

But after interviewing compliance teams and regulators I changed my mind. Why?

The first reason concerns how the data is being used. The idea behind the tags is not that they will allow regulators to determine instantly if manipulative activity is being attempted. The algorithm tags do not tell trade surveillance anything in particular about firms' algorithms; they serve simply to help trade surveillance distinguish between different strategies. So, even if the tags correspond only imperfectly to firms' actual trading processes, they still increase the granularity of market trade data for surveillance officers. The rule aims simply to speed up surveillance officers' decision-making about whether suspicious-looking trades warrant further investigation, not to grant them an all-knowing gaze over the market.

Obviously, the rigorous compliance responses of large high frequency trading firms provide more useful data for this purpose. Yet even less rigorous compliance responses still provide better information for trade surveillance than anything they previously had access to. The usefulness of the tags should not therefore be judged according to an all-or-nothing criterion - either they are accurate and useful, or inaccurate and worthless. Instead, the trade surveillance officer I spoke to stressed that any new layer of data is valuable for regulatory enforcement. To understand the point, think of securities markets enforcement less like a physics experiment and more like old-fashioned detective work: how likely is it that the detective would turn down information just because the source might be unreliable?

The second reason why the rule might be proving useful concerns its effects on the cultures of trading firms. In the past, it was possible for smaller firms to trial algorithms live on the market. If they worked: 'good'. If they blew up: 'never mind', just rewrite the code. After the introduction of the tagging rule, greater operational prudence has become necessary. As my interviewees put it: the passing of the German HFT Act has begun bringing to a close the era of 'cowboy' proprietary trading. But the changes are not just significant for how smaller prop shops operate.

In the larger trading firms, which already had rigorous compliance procedures in place, the tagging rule is changing culture in more subtle but potentially no less beneficial ways. The compliance officers I spoke to at these firms recalled that prior to the introduction of the rule they were not respected by traders and were frequently brushed off by them. Traders saw no need to interrupt their trading activity to engage in compliance exercises and did not see the need to explain, or even share, the details of their algorithms with compliance officers. The result was what the Financial Times columnist Gillian Tett calls the "Silo Effect".

Financial risks are often blamed on unrealistic models or poor judgement by traders. The value-at-risk models used by banks before the financial crisis are one famous example. But as Tett argues, risk may also derive from knowledge within firms being contained within discrete silos. Even though firms' risks are interconnected across divisions, black boxes develop due to a complex organisational division of labour. By requiring compliance officers to introduce systems for identifying and monitoring algorithms, the tagging rule seems to have helped break up these knowledge silos within trading firms.

As some of my interviewees put it, the German HFT Act has given compliance officers a strong mandate for forcing quants, coders and traders to get around the table and discuss the details of their algorithms. In one firm I spoke to, this has resulted in compliance officers sitting in the proprietary trading room and making traders justify changes they make to their algorithms' code in order to determine if a new tag is required. A related change has been the need for technical up-skilling of compliance officers. It is no longer adequate for compliance officers to approach their work through a purely legalistic lens; after the introduction of the algorithm tagging rule they now also need to be proficient in the technical aspects of trading.

What these findings suggest is that the tagging rule may be proving useful both for the end-users of the data (trade surveillance officers) and for the creators of the data (the compliance and technical teams who have implemented the rule).

The future of market regulation

Six years on from the so-called 'Flash Crash' of 6 May 2010, which cast, in the public's eyes, a suspicious aura around automated trading, there is still no global consensus about how algorithmic and high frequency trading should be regulated. Indeed, this inertia has led some to question the wisdom of regulatory efforts. For techno-libertarians, regulation will always trail too far behind market practices to be effective.

The complexity of automated markets and the sheer volume of data they generate mean that government bodies cannot regulate them; the market should be left to devise its own solutions. Others point out that regulators, most of whom have legal backgrounds, simply haven't got a sufficient grasp of the technology. Regulators' clumsy attempts to bring order and transparency to markets just reflect their ignorance about how far we have come from the days of open-outcry pit trading.

There may be some truth to these views. However, I believe what my study of the algorithm tagging rule shows is that regulators do not need to be all-knowing or all-seeing to make a difference. As demonstrated by the story of how German regulators defined an algorithm, regulators can draw on the knowledge of market participants for effective rule-making. What is more, the data regulators make use of does not need to be completely accurate to be useful. Incremental improvements to regulatory knowledge can be just as effective as grandiose schemes to bring about market transparency (think, for example, of the perpetually-delayed Consolidated Audit Trail project in the United States).

The widespread use of algorithms is certainly one of the most challenging developments financial regulators are grappling with today. But as the implementation of the algorithm tagging rule shows, if regulators have a clear problem in mind and take a flexible approach to solving it, then they might be rewarded with improvements to their practices and changes in financial culture that are beneficial for all parties concerned.

References

Coombs, N. (2016) Economy and Society, 45(2). http://www.tandfonline.com/doi/abs/10.1080/03085147.2016.1213977