1. Market rigging among repentant demiurges and out-of-control artificial intelligence
Choral act of sorrow, negligible real mea culpa, from swayed hosannas to doubtful crucifixion. The target changes, not the style: the social-network script replayed for artificial intelligence. The repentant demiurges multiply: from Bill Gates to Elon Musk to Jack Ma and, at the end of May 2023, even from the brilliant contemporary creators Sam Altman (OpenAI and ChatGpt), Geoffrey Hinton (formerly Google), Demis Hassabis (Deepmind) and others comes a desperate S.O.S. against the overwhelming power that Ai could assume, to the point of dissolving humanity: posthumous clairvoyance after procreative excitement (will the same happen with cryptocurrencies? We shall talk about it at the first real debacle). Those who created or sustained it, driven by a feeling of almightiness, now invoke rules. Media hypocrisies aside, these are rules that the EU and, surprisingly, even the U.S. would very much like but in practice do not quite know how to write. What rules? Let us narrow an overly broad and “philosophical” spectrum down to something smaller, tangible and dangerous: market rigging. Market rigging is, in a nutshell, poker with marked cards: trading after spreading false or misleading news; trading through artifice or deception; placing, withdrawing, re-placing, cancelling and reformulating orders to alter the course of the market; creating parallel and hypnotic trading; misleading investors; generating profits out of thin air. Now add Ai, which could manipulate markets autonomously, perhaps by replicating illicit patterns unearthed in its neural networks yet without perceiving their unlawfulness.
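The order-book choreography just described (placing, withdrawing, re-placing and cancelling orders to steer the market) is what surveillance systems try to catch under the labels "spoofing" and "layering". A minimal, purely illustrative sketch of such a detector follows; the event format, the `spoofing_flags` function and the 0.8 threshold are assumptions for illustration, not any real surveillance system's API:

```python
from dataclasses import dataclass

@dataclass
class OrderEvent:
    ts: float     # event time, in seconds
    action: str   # "place", "cancel" or "trade"
    side: str     # "buy" or "sell"
    qty: int      # order quantity

def spoofing_flags(events, ratio_threshold=0.8):
    """Flag a side of the book where almost all placed quantity is
    cancelled without trading: the place/cancel churn described above.
    Returns {"buy": bool, "sell": bool}."""
    placed = {"buy": 0, "sell": 0}
    cancelled = {"buy": 0, "sell": 0}
    for e in events:
        if e.action == "place":
            placed[e.side] += e.qty
        elif e.action == "cancel":
            cancelled[e.side] += e.qty
    return {side: placed[side] > 0
                  and cancelled[side] / placed[side] >= ratio_threshold
            for side in ("buy", "sell")}
```

Real surveillance engines work on sliding time windows and weigh churn against executed volume and price impact; this sketch collapses the idea to a single cancellation ratio per side of the book.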
2. Regulatory background and new ruling frameworks
Algorithmic trading (At) has been around for more than two decades; its heir, high-frequency trading (Hft), which operates down to the millisecond, for at least a decade. European regulation (Mifid2, Mad and Mar) has laid down certain precepts of restraint: resilient systems capable of handling spikes in order volumes, of ensuring orderly trading and continuity of service in critical market conditions, and of avoiding market-abuse events. The underlying assumption is that the machine is still instructed and governed by the human, who therefore remains responsible for any of its deviations or intemperance. Ai would seem to introduce a hiatus in this pedagogical process: the human can teach, but the machine can learn on its own. Hence the distinction, evoked last May in a meticulous and valuable reconstruction (Consob – Quaderno giuridico n. 29/2023), between weak and strong Ai systems: the former instructed by the human, who for that reason remains solely responsible for the machine’s misdeeds; the latter able to act autonomously, reaching outcomes not foreseen by their own creators or trainers, who would thus be blameless for the misdeeds of their creatures.
Of the three solutions now being contemplated, only one convinces, and only partially. The first (punishing the machine) presupposes a science-fictional legal subjectivity that is inconceivable (machina delinquere non potest, to rephrase the famous late-nineteenth-century German scholar Franz von Liszt), all the more so because, devoid of genuine feeling, the robot would be completely insensitive to the afflictive effect of a sanction; nor, pushing the theory to its extreme, would the prospect of ultimate agony (its destruction) deter it. The second (collectivizing the damage), besides smelling of surrender, overlooks that the principle – the one applied to online banking frauds, where the customer is not liable except for malice or gross negligence and the bank, though blameless, spreads the risk indiscernibly and offsets it with the increased profits derived from the diffusion of home-banking services – would not produce a true and equitable reallocation of the damage, since a market rigging could unfairly strike even very small groups of investors guilty only of having stumbled into the claws of a felonious device. The third solution (placing the risk on creators and instructors in objective terms, i.e., regardless of their awareness of that risk) seems, in part, the most reasonable though incomplete way forward, and perhaps not even one in much need of a specific rule – which, however, would not hurt.
3. Educating the cyber-trader by taking away computing materials
The problem, not simple but not as complicated as some would like to portray it, must start from an undeniable truth: creativity does not belong to the machine, even when its astonishing performance spreads the opposite illusion. The machine calculates, lightning-fast and with extreme precision, but it still merely calculates, and in order to calculate, material must be made available to it. So the problem lies in what the machine is taught, even in the negative sense of what the machine must not do.
This is well known to anyone who has tested ChatGpt by asking, with a precise and credible justification, embarrassing questions of a racist, sexist, discriminatory or otherwise politically incorrect slant (or deemed so). The serious justificatory context in which the questions are asked does not make a micron’s dent in the artificial brain’s choice to refuse every answer, citing the inappropriateness of the topic. Why? Because those who created it have placed limits on its processing faculties, taking computational material away from it. To avert Ai-driven manipulation, it is enough to teach the machine that easy-profit practices implemented by replicating illicit financial models are likewise forbidden. The most popular market-rigging techniques are by now well known, so it is sufficient, at the programming stage, to inhibit the replication of those illicit practices and, with a further quantum leap beneficial to the market, to make the machine also detect new trades that produce a manipulative effect. The same applies to the exceptions allowed by law, which the cyber-trader will equally have to assimilate and learn to adapt to different situations. It is not a matter of placing an objective risk on the programmers (and on those who in turn command them), but of applying elementary principles of prudence, diligence and expertise.
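Inhibiting the replication of illicit practices at the programming stage amounts, concretely, to hard-coding pre-trade constraints that the learning agent cannot override, in the same way ChatGpt's refusals are imposed from outside the model. A deliberately simplified sketch, in which the pattern signature (repeated same-side cancellations with no fills) and every threshold are assumptions chosen for illustration:

```python
def pre_trade_veto(intent_side, recent_actions, max_churn=5):
    """Veto a new order when recent history shows heavy place-and-cancel
    churn on the given side with no executions, i.e. when the next order
    would extend a known manipulative pattern rather than a genuine
    trading intention. `recent_actions` is a list of (action, side)
    tuples, with action in {"place", "cancel", "trade"}."""
    cancels = sum(1 for action, side in recent_actions
                  if action == "cancel" and side == intent_side)
    fills = sum(1 for action, side in recent_actions
                if action == "trade" and side == intent_side)
    # Veto only the suspect combination: lots of cancels, zero fills.
    return cancels > max_churn and fills == 0
```

Because such a veto runs outside the model, even a "strong" Ai that discovered a spoofing-like policy on its own could not execute it: the computational material for the manipulation is, in the article's terms, taken away.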
I can see the programmers ready to throw in the towel. Multimillion-dollar fines and years in jail certainly do not sit well with the expectations and salary of a computer scientist, assuming someone has explained to them what legality and compliance mean. Hence a chain of responsibilities, for which it is not so much necessary to regulate as to recognize that machines do not enjoy “techno-impunity”.
Harsher and more precise rules are welcome, but let us not forget that, back in the 1980s, the early computer gurus made it clear that the computer is a dumb machine: if we feed it garbage, garbage will come out – reworked, but still garbage. Exactly…!
 Managing Partner, Studio Ghidini, Girino & Associati – Thomson Reuters Stand-out Lawyer 2023