The Killer Humans Behind Killer Computers

Written by icyapril | Published 2024/04/10

TL;DR: The public used to treat computer disasters as natural phenomena, but recent scandals like the Horizon IT Scandal in the UK have shed light on the negligence, cover-ups, and wrongdoing behind them. Junade Ali explores other cases in a new book on killer computers.

Yuan Yi Zhu, an Assistant Professor of International Relations and International Law at Leiden University, recently criticized an article published in The Times of London. The article was written by Matthew Parris, a former Member of the British Parliament.

In the article, Parris argues in favor of legalizing euthanasia on the grounds that society simply cannot afford “desperate infirmity for as many such individuals as our society is producing.” He argues that it would allow the UK economy to better compete with China, and that people being coerced into euthanasia by social pressure is “not a bad thing”.

Parris says he does not apologise for how his article "treats human beings as units - in deficit or surplus to the collective".

In February 2024, an article was published in the prestigious Scientific Reports journal by Nature, arguing that “AI holds the potential to carry out several major activities at much lower emission levels than can humans.” The piece argues that humans produce far greater CO2 emissions when doing writing or illustration work than AI large language models do. The article also notes that “The freed human time may also incur new, unexpected environmental costs.”

Advances in AI haven’t just led to humans being compared to machines but also to animals. Late last year, according to The Guardian, BT’s Chief Innovation Officer, Harmeen Mehta, ‘suggested workers whose jobs are threatened by AI accept their fate as “evolution,” comparing them to horses replaced by the car.’

Against this backdrop, one does wonder when we will inevitably see the rise of those who argue for eugenics so that humans can replace machines. This is nevertheless a stark reminder that humans are ultimately behind killer computers.

In my new book, ‘How to Protect Yourself from Killer Computers,’ I explore many catastrophic computer disasters, including aircraft entering ‘death dives’, fatal car crashes, and lethal radiation doses.

Recently, software reliability has been pushed to the forefront of the public consciousness. March 2024 saw system failures that took McDonald’s outlets in the UK, Australia, and Japan offline. Panera Bread in the US saw IT outages, as did the UK high-street giants Tesco, Sainsbury's, and Greggs. As several Boeing executives stood down from their jobs following the death of a whistleblower, it’s worth remembering that the first Boeing 737 MAX groundings began with a computer software problem. In a previously unreported incident, March 2024 also saw London police officers report chaos in jails and an inability to begin search warrant applications during an outage of the Metropolitan Police’s IT systems.

The public inquiry into the UK Post Office’s Horizon IT Scandal is underway. In that scandal, faulty accounting software resulted in miscarriages of justice claiming around 700 victims, including a pregnant mother who was sent to jail, and has been linked to four suicides.

Engineers are now facing the consequences of being complicit in disasters: FTX’s former Director of Engineering faces up to 75 years in prison, a VW engineer complicit in the emissions scandal was sentenced to 40 months’ imprisonment despite co-operating with investigators, and the architect of the Post Office Horizon IT system is currently under investigation by UK police for perjury.

Whilst the technical factors varied across these disasters, I found a common theme of wrongdoing, retaliation, and cover-up. Issues were brushed under the carpet instead of being addressed directly.

For example, one engineering manager at Fujitsu, which built the software behind the Post Office scandal, told the public inquiry how he “was threatened with violence by an individual with a reputation for violence” during one of his early attempts to remedy the issues.

This is an industry-wide problem. In November 2023, I led research that found that 53% of software engineers suspected wrongdoing at work, and that 75% of software engineers in the UK had experienced retaliation the last time they reported wrongdoing to their employers.

This is a concern, as computers inevitably make mistakes: no software is perfect. Something as small as a single bit being flipped (e.g., a “1” becoming a “0”) by cosmic rays is enough to cause havoc in computer systems, particularly those that haven’t been designed with this risk in mind.
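To make this concrete, here is a minimal Python sketch, my own illustration rather than anything from the book, of how a single flipped bit changes a stored value and how a parity check, the simplest ancestor of the error-correcting codes used in ECC memory, can detect it:

```python
# Minimal sketch: a single-event upset (e.g., from a cosmic ray) flips one
# bit of a stored value; a parity bit recorded alongside it detects the change.

def flip_bit(value: int, position: int) -> int:
    """Simulate a cosmic-ray bit flip by toggling one bit of an integer."""
    return value ^ (1 << position)

def parity(value: int) -> int:
    """Even-parity bit: 1 if the value has an odd number of 1-bits."""
    return bin(value).count("1") % 2

stored = 700                      # the value we intend to keep
check = parity(stored)            # one extra bit recorded alongside it

corrupted = flip_bit(stored, 3)   # an upset toggles bit 3 in memory

print(f"stored:    {stored:4d} (binary {stored:010b})")
print(f"corrupted: {corrupted:4d} (binary {corrupted:010b})")

# Any single flip changes the count of 1-bits by one, so the parity
# no longer matches and the corruption is detectable.
if parity(corrupted) != check:
    print("Parity mismatch: bit flip detected")
```

A single parity bit can only detect such an error, not locate or repair it; systems designed with this risk in mind go further, using ECC memory that corrects single-bit flips and checksums throughout storage and networking.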

There is, however, hope here.

The research conducted for the new book finds that resilient systems can be built when humans are part of the system. In these socio-technical systems, humans play a role in preventing disaster by raising the alarm and mitigating the issues.

For this to happen, humans must be free to raise the alarm about issues when they come across them. If the on-call engineers who keep our online services and critical national infrastructure running are unable to speak up and act, disasters follow. Nor does this apply only to on-call engineers: it applies to all software engineers and, indeed, to everyone who is part of the software development process, including the customers.

Killer computers are born in toxic cultures where engineers aren’t able to raise the alarm, and issues are brushed under the carpet. By making sure humans play a key role in the reliability of computer systems, we can help ensure technology serves society.

