Last term, the A-Level politics classes headed off to St Albans, to a debate in the soaring nave of the cathedral. The question? “Is AI (Artificial Intelligence) a force for good or for evil?”

The panel was a mixed selection, from Daisy Cooper, the Liberal Democrat MP for St Albans, to Jo Kelly-Moore, the Dean of the cathedral: a range spanning politics, ethics and religion, all areas into which the discussion of Artificial Intelligence has reached.

There were some excellent points raised by students from all across the county on both sides of the debate, but it was the undertone of the discussion that was most interesting. There seemed to me to be an implicit consensus on the certainty of Artificial Intelligence, and the inevitability of its impact on our future. The question which then remained for us to discuss was whether we should attempt to regulate and constrain, or whether we should instead welcome this ‘progress’ with open arms, helping the cat out of the bag.

The idea of regulation was brought up repeatedly, but the question of who these ‘regulators’ are got to the heart of the issue. One panellist argued that regulation of a technology like this would need to be fully international to be truly effective, but I wonder if there is any principle, ethical, cultural or social, that we as a species can agree on.

Even a seemingly simple principle, “killer robots are probably a bad idea” for example, has already been recklessly cast aside by the now-standard use of unmanned combat drones. Therefore, even if we accept a need to regulate and control these new technologies, as the Prime Minister’s recent summit with Big Tech firms seemed to conclude, it is difficult to see how any such regulation could be written universally, or how it could prevent the development of prohibited technologies by groups who would benefit from them.

How about the arguments for the benefits of the new technology?

Here was where the language of the debate was most revealing. Artificial Intelligence could help improve our ‘productivity’, it could help diagnose and treat cancer patients so they could ‘return to work more quickly’, and it would help to drive ‘growth’. It became a discussion of abstract and compound measurements, figures which lose sight of the human, in an unsettling parallel to the threatened subversion of humanity by these very technologies.

Perhaps I am being overly dramatic and overly ideological, but Artificial Intelligence is a development which is inherently ideological. It is a catalyst for discussion of what it means to be human, or, less abstractly, of what we want it to mean to be human. We may improve our productivity, always have access to a simulated friendly face, and be able to consume any form of media we could ever wish for, all generated before we have even asked for it. But after all of that, will we have lived better?

Blog entry by Oliver Kingsland, St Chris student.