Killer Robots and the Changing Nature of Research Ethics
(Last updated: 13 May 2021)
It’s fair to say that university research ethics isn’t a topic that ranks very high in the public consciousness. But in the past few weeks, we’ve seen quite a few stories that address – albeit in very different ways – a subject that’s normally restricted to discussions between academics and university administrators.
Let’s face it: if you want a story to capture the imagination, having killer robots in it never hurts. And while they may sound like science fiction, killer robots made global news at the start of April. These stories – some of them illustrated with stills from the Terminator movies – reported that more than fifty leading academics in the field of Artificial Intelligence research had called for a boycott of the Korea Advanced Institute of Science and Technology (KAIST).
The proposed boycott centred on fears that the Institute and its partner, the defence manufacturer Hanwha Systems, were conducting research that could eventually lead to the manufacture and sale of autonomous weapons. Advances in robotics over the past few years have raised concerns that futuristic-sounding autonomous weapons – or “killer robots” – may now be within reach. A UN meeting on autonomous weapons is scheduled for this month, and more than twenty countries have already called for an outright ban.
Why were the researchers calling for a boycott?
Although the boycott was eventually called off after KAIST offered assurances about its intentions, the debate touched on important questions about research ethics. Research for its own sake is the lifeblood of universities, and most scholars would like to believe that their work is to the collective benefit of humanity. But even the greatest discoveries can lead, indirectly, to harm and loss of life.
Albert Einstein is widely regarded as the greatest physicist of the twentieth century. But after witnessing the destruction caused by the Hiroshima bomb, which his discoveries in physics had helped make possible, Einstein famously remarked: “If only I had known, I should have become a watchmaker.”
“Pure” research and impact
But while there has always been a tension between “pure” research and the ends to which its insights are eventually put, that tension has been magnified in recent years by the “impact” culture around research. These days, exercises such as the UK’s Research Excellence Framework (REF) explicitly require researchers to demonstrate that their work has “real-world” applications and to collaborate with governmental or commercial partners to put it to use. Researchers are also actively encouraged to develop spin-out ventures to maximise the commercial value of their research.
None of this is especially evident in research ethics policies, though. Read through any university’s processes for obtaining ethics clearance and you’ll notice a couple of things. Firstly, they tend to assume that the research is being conducted for its own sake. And secondly, they focus overwhelmingly on the treatment of human subjects: ensuring anonymity and protection for participants in social studies or clinical trials. These two factors mean such policies generally have very little to say about where research data ends up, or about the ethical implications of its eventual use.
Time for a public conversation
The killer robots story may be the most eye-catching recent one about research ethics, but it’s certainly not the only one – nor, arguably, the most important. The recent scandal involving Cambridge Analytica and Facebook may well prove a watershed moment for thinking about the relationship between academic research and commercial enterprise, and about its ethical implications.
Aleksandr Kogan, the academic who developed the software used to mine the data of millions of Facebook users, has claimed that he acted appropriately and in accordance with Cambridge University’s ethics policies at all times. His interest in the data mining, he argues, was purely academic and for the purposes of legitimate social science research. He now claims he’s being made a “scapegoat” by Facebook and Cambridge Analytica.
Kogan’s defence goes to the heart of the sometimes murky relationships between academic research and the stakeholders that part-fund and benefit from this research. The changing – and increasingly commercialised – nature of academic research arguably means that there’s no such thing as “pure” scholarly research anymore and that ethics policies need to be updated and expanded as a result.
A public conversation on the nature and ethics of contemporary scholarly research is overdue. And if data mining is a bit too abstract to prompt that conversation, we’ve always got killer robots.