How should we police the trader bots? – Financial…

An interesting thing about flash crashes is that they should no longer happen. The rules to protect markets against algorithmically generated disorder are established, wide-ranging and highly demanding.
Problem is, those rules don’t seem to be applied.
Consequences can appear obvious, such as last month, when European markets sold off after a Citigroup trader in London reportedly added an extra zero to an order. The cross-market ripple effects that session strongly suggest that algos across multiple firms failed to respond to thinner-than-usual volume and so contributed to the turbulence. That in turn raises difficult questions about whether the same algos might be rocking the boat in less obviously stressed conditions.
An overarching rule of securities legislation is that market abuse is market abuse, irrespective of whether it’s committed by a human or machine. What matters is behaviour. An individual or firm can expect trouble if they threaten to undermine market integrity, destabilise an order book, send a misleading signal, or commit myriad other loosely defined infractions. The mechanism is largely irrelevant.
And importantly, an algorithm that misbehaves when pitted against another firm’s manipulative or borked trading strategy is also committing market abuse. Acting dumb under pressure is no more of an alibi for a robot than it is for a human.
For that reason, trading bots need to be tested before deployment. Firms must ensure not only that they will work in all weathers, but also that they won’t be bilked by fat finger errors or popular attack strategies such as momentum ignition. The intention here is to protect against cascading failures such as the “hot potato” effect that contributed to the 2010 flash crash, where algos didn’t recognise a liquidity shortage because they were trading rapidly between themselves.
Mifid II (in force from 2018) applies a very broad Voight-Kampff test. Investment companies using European venues are obliged to ensure that any algorithm won’t contribute to disorder and will keep working effectively “in stressed market conditions”. The burden of policing falls partly on exchanges, which should be asking members to certify before every deployment or upgrade that bots are fully tested in “real market conditions”.
But what that means in practice gets complicated quickly, because for specifics it’s necessary to dive into Mifid II’s Regulatory Technical Standards (RTS) updates.
RTS 6 sets out the basic self-assessment framework for investment firms to certify that their bots won’t antagonise markets. Its sequel, RTS 7, applies a separate and completely different definition of whether bots will contribute to market disorder. In short, an RTS 7-compliant firm must certify that all systems won’t amplify any market convulsion, and must include an explanation of how this testing has been done.
RTS 6 is well understood, but how many trading firms are meeting the RTS 7 criteria? According to Nick Idelson, technical director of consultancy TraderServe, it’s likely that fewer than half have stress tested their algo strategies to the standard required. The scale and complexity of the job suggests even this estimate may be optimistic.
Mifid II’s definition of an algo lets through automated venue routing and catches pretty much everything else. If there’s “limited or no human intervention” required when generating a quote, it’s an algo. If pre-determined parameters control price, order size or timing, it’s an algo. If there’s any post-submission strategy in place other than straightforward execution, it’s an algo. Stress tests of these systems need to prove that everything will work as intended both individually and in combination.
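The breadth of that definition is easiest to see as a checklist. Here is a rough, illustrative paraphrase in Python (the field names and structure are our own, not regulatory language): any one of the three tests is enough for a system to count as algorithmic trading.

```python
from dataclasses import dataclass

@dataclass
class TradingSetup:
    """Hypothetical description of how a firm generates and manages orders."""
    human_intervention: bool                 # a person decides each quote/order
    parameters_set_price_size_timing: bool   # pre-set parameters drive price, size or timing
    post_submission_management: bool         # any strategy beyond plain execution after submission

def is_algo_under_mifid2(setup: TradingSetup) -> bool:
    """Loose paraphrase of the Mifid II tests described above:
    satisfying any one of the three conditions makes it an algo."""
    return (
        not setup.human_intervention
        or setup.parameters_set_price_size_timing
        or setup.post_submission_management
    )

# A desk where a human signs off every order but a model picks the size
# still counts as an algo on this reading.
print(is_algo_under_mifid2(TradingSetup(True, True, False)))  # True
```

The point of the sketch is how hard it is to fall through all three gates at once: only a fully manual, parameter-free, fire-and-forget order escapes.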
Equally broad is the regulation’s reach, which applies to all Mifid II-defined financial instruments on any venue that allows or enables algorithmic trading. The “or enables” bit brings within scope venues that ban automated trading strategies, as well as those without auto-matching trading systems. (See question 31 of ESMA’s 2021 Q&A.) Under a strict interpretation of the rules, it’s almost impossible for any trade to meet best execution obligations without also being defined as automated.
Penalties for non-compliance are significant, at up to €15mn or 15 per cent of turnover for firms and up to four years in prison for individuals. The global picture is similar, with IOSCO’s market integrity principles providing a cross-border enforcement framework.
But in contrast to the US (where JPMorgan Chase landed a $920mn settlement in 2020 for spoofing precious metals futures) and Hong Kong (where Instinet Pacific and Bank of America have been fined for bot management failures), the approach to algo policing in the UK and Europe has been softly-softly. As the FCA noted in its May 2021 Market Watch bulletin:
Our internal surveillance algorithms identified trading by an algorithmic trading firm which raised potential concerns about the impact the algorithms responsible for executing the firm’s different trading strategies were having on the market. As a result of our inquiries, the firm adjusted the relevant algorithm and its control framework to help avoid the firm’s activity having an undue influence on the market.
One hurdle the regulators face is around definitions, because it’s tricky to pin down exactly what contribution-to-market-disorder stress testing means.
Is it enough for firms to run historic markets data through bots within a sandbox? Or does such an approach risk missing the feedback loops created when the fleet interacts with responsive markets? TraderServe has worked with regulators on best practices using live market simulations yet, according to Idelson, it remains impossible for an outsider to know whether any firm’s approach to testing was comprehensive, cursory or nonexistent. For that reason, establishing some public precedents would be useful.
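A toy model (ours, not any regulator’s or TraderServe’s methodology) shows why the distinction matters. Run the same naive momentum strategy against fixed historic prices, then against a simulator in which its own buying nudges the next price, and the feedback loop appears only in the second case:

```python
def momentum_signal(prices):
    """Buy 1 unit whenever the last tick went up, else do nothing."""
    return 1 if len(prices) >= 2 and prices[-1] > prices[-2] else 0

def replay_historic(history):
    """Sandbox replay: the bot trades, but prices are fixed in advance."""
    position = 0
    for t in range(len(history)):
        position += momentum_signal(history[:t + 1])
    return history[-1], position

def responsive_simulation(history, impact=0.5):
    """Responsive market: each buy pushes the next price up, which can
    feed back into the next signal -- the loop a static replay can't see."""
    prices, position = [history[0]], 0
    for t in range(1, len(history)):
        position += momentum_signal(prices)
        prices.append(history[t] + impact * position)  # own flow moves price
    return prices[-1], position

history = [100.0, 100.5, 100.2, 100.4, 100.1]
print(replay_historic(history))        # (100.1, 2): price untouched by the bot
print(responsive_simulation(history))  # (101.6, 3): inflated by its own buying
```

In the replay the bot stops buying when the data dips; in the responsive version its own flow keeps the signal positive and it buys into a falling market. A historic-data sandbox would pass this strategy; a live market simulation would flag it.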
Judged by the weaknesses exposed by the Citi flash crash, Europe’s nudge approach to bot regulation is looking insufficient. But the wide-ranging obligations set out in Mifid II make more proactive forms of policing difficult to maintain. If non-compliance is as widespread as it appears, the most effective form of enforcement available to regulators may be a good old-fashioned show trial.
Further reading:
AI Trading and the limits of EU law enforcement in deterring market manipulation — Computer Law & Security Review (PDF)