
Should you be scared of AI?

  • Writer: Ben M
  • Sep 30, 2023
  • 5 min read

Updated: Nov 1, 2023


"Last month, AI was officially classed as a security threat to the UK for the first time following the publication of the National Risk Register (NRR) 2023. The extensive document details the various threats that could have a significant impact on the UK's safety, security, or critical systems at a national level. The latest version describes AI as a 'chronic risk,' meaning it poses a threat over the long term, as opposed to an acute one such as a terror attack."


Why is the UK Gov categorising AI as a national security risk?


The potential misuse of Artificial Intelligence in technology, finance, industry and geopolitical competition creates a requirement for the government to safeguard critical infrastructure systems and information. In this blog I want to explore why AI could be considered a national security risk:


Potential for Misuse

- Automated Warfare: AI has the potential to revolutionise warfare. Autonomous weapons systems, driven by AI, could be employed in conflict scenarios, with the risk of escalation and inadvertent engagements, especially if control over these systems is lost or if they are hacked. We don't yet know how AI systems might handle conflict with other AI systems. Except... we do!

We have multiple scenarios to call on where AI battles AI. Take chess, for example. A modern chess engine will beat even the greatest collection of human Grand Masters. When one engine takes on another, the battles are rapid, the scores are roughly matched, and both systems improve in the process.

However, when an AI system collaborates with a human player (especially a Grand Master), the pairing is mostly unbeatable. Many experts hypothesise that the human injects a level of chaos into the system that gives the partnership its edge.


- Misinformation and Deepfakes: AI is already being utilised to fabricate convincing fake videos and audio recordings, known as deepfakes, which are being employed to spread misinformation, destabilise political situations and run malicious influence campaigns. Deepfakes could be the new frontier in fraud, information warfare and open-source intelligence gathering. What was once hypothetical technology, or at the very least beyond the reach of an average person, is now available and even easy to wield. The answer is to keep a sceptical, open mind about everything we're fed on the internet, and even through mainstream/traditional media channels.



Geopolitical Competition and Unrest

- Technological Supremacy: Nations are vying for dominance in AI, as the technology is seen as a key determinant of future economic and military power. Like other nations, the UK might view AI as a national security risk if it lags in this global competition: it cannot risk becoming reliant on foreign AI technologies or falling behind in military applications. The UK is a legitimate world power in scientific research, development, delivery and application. The world has benefitted from untold innovations at the hands of UK scientists, and AI is no different. The battle will be to create systems that conduct the automation in a way consistent with the values of a mature democracy like the UK's.


Protection of Critical Infrastructure

- Cybersecurity: As well as encouraging doom scrolling on TikTok, AI can be a critical tool for both offensive and defensive cybersecurity operations. It can bolster defence mechanisms by detecting and mitigating threats in real time, but it can also be employed by adversaries to conduct sophisticated cyber-attacks. We need to stop thinking about AI as an independent sentient being. It isn't that (yet). It's a library of knowledge with a really fast index, one that can find the relevant bits from all the books in that library quickly enough to present a very good approximation of the right answer. It doesn't just need the author, title, chapter and so on; it can use thousands of heuristics to locate the pertinent information across its data set. Imagine a service that can model and replay the hardest hacks ever attempted, in a fraction of the time, in order to make an exceptionally good guess at what the outcome of the attempt might be. Think Dr Strange in Endgame.
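The "library with a really fast index" picture above can be sketched as a toy inverted index. This is a deliberately minimal illustration of the idea of indexing for fast lookup, not how any real AI system is actually built; the document names are invented:

```python
from collections import defaultdict

def build_index(docs):
    """Map each word to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = set(index.get(words[0], set()))
    for word in words[1:]:
        results &= index.get(word, set())
    return results

# Hypothetical "library" of incident reports.
docs = {
    1: "phishing email with malicious attachment",
    2: "ransomware spread via email",
    3: "firewall blocked malicious traffic",
}
index = build_index(docs)
print(search(index, "malicious email"))  # only doc 1 contains both words
```

Real systems layer far richer heuristics (rankings, embeddings, context) on top, but the core trick is the same: pre-organise the data so the pertinent fragment can be found fast.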


- Supply Chain Security: AI components and software often form part of critical infrastructure systems. Ensuring the security and integrity of these systems, especially against the backdrop of complex global supply chains, is paramount for the UK government and its allies. Every service and dependency pulled in from across the internet is a potential attack vector, and the scale of the tasks at hand is multiplying those vectors. The nation will need to structure its engineering enterprises to highlight and mitigate the risk of supply chain attacks. Only talented software architects and engineers, working collaboratively with machine learning models, will be able to adequately prepare supply chains for risk mitigation on that scale.
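One concrete mitigation for the dependency risk described above is digest pinning: record a cryptographic hash of every artefact at review time and refuse anything that doesn't match. A minimal sketch in Python (the artefact content here is invented; real tooling such as lockfiles and hash-checking installers applies the same idea at scale):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of an artefact's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, expected_digest: str) -> bool:
    """Accept a dependency only if its content matches the pinned digest."""
    return sha256_of(data) == expected_digest

# Hypothetical dependency payload, pinned at review time.
payload = b"example-library-1.0.0"
pinned = sha256_of(payload)

assert verify_artifact(payload, pinned)          # untampered: accepted
assert not verify_artifact(b"tampered", pinned)  # modified upstream: rejected
```

The pin only helps if the recorded digest itself is trustworthy, which is why it belongs in reviewed, version-controlled metadata rather than alongside the download.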


Regulatory and Ethical Challenges

- Privacy: The data-centric nature of AI raises substantial privacy concerns. Governments need to balance the drive for innovation with the preservation of citizens' privacy and data protection rights.

Our democracy is based on the idea of individual freedoms. We are allowed to think. We are allowed to rebel. We are allowed to live. All without the worry of a single entity, be that government or corporation, using our data, our information, against us. Yet we've deployed a service so powerful it can model people's thoughts, into the hands of every person with an internet connection. The government faces the task of closing Pandora's box without knowing what's escaped. Regulatory provision is necessary, but it must be pragmatic. There is little point in legislating for today's technical advances: the legal system reacts too slowly, to the point of being redundant and irrelevant. An ethical engineering culture is our only hope.


- Bias and Discrimination: AI systems can perpetuate or even exacerbate existing social biases if not properly designed, regulated and audited. You've seen this every time you look at TikTok, Facebook or Instagram. The system knows you; at least, it thinks it knows you. The kicker is that it's a self-defeating circle. The system guesses at you. Sometimes it's right and sometimes it's wrong, but in the process it's changing you. It's pointing you in the direction it wants you to go. Little nudges in the direction its masters encourage. You become it and it becomes you.

Again the only mitigation is an open mind.
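That feedback loop can be made concrete with a toy simulation. This is an invented, deliberately simplified model, not any platform's actual algorithm: the recommender shows whichever topic it believes you prefer, and every recommendation nudges your true preference slightly toward what was shown.

```python
def simulate_feedback(steps=20, nudge=0.05, start=0.5):
    """Toy recommender loop: each recommendation nudges the user's
    true preference for topic A toward whatever was shown."""
    pref = start            # genuine interest in topic A, between 0 and 1
    history = [round(pref, 2)]
    for _ in range(steps):
        shown_a = pref >= 0.5                # show the topic it thinks you like
        if shown_a:
            pref = min(1.0, pref + nudge)    # nudged further toward A
        else:
            pref = max(0.0, pref - nudge)    # nudged further away from A
        history.append(round(pref, 2))
    return history

h = simulate_feedback()
print(h[0], h[-1])  # starts balanced at 0.5, ends pinned at 1.0
```

Even from a perfectly balanced start, the loop drives the preference to an extreme: the system's guess changes the very thing it is guessing, which is the "self-defeating circle" described above.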


Economic Implications

- Job Displacement: There's concern that AI and automation could displace a significant number of jobs, which could have both economic and societal ramifications. This, in turn, could pose a security risk if large segments of the population find themselves unemployed and disenfranchised.

In light of these factors, I hope the UK government's categorisation of AI as a national security risk leads to a comprehensive regulatory framework and promotes ethical AI development and deployment. The goal is to harness the benefits of AI while mitigating the associated risks; the practice will be different. We will definitely lose jobs. We have definitely lost jobs already. Walk into Sainsbury's and you'll see it for yourself: checkout staff are rarer. Jobs that can be automated will be automated. A company has the goal of making and saving money, and it can do this with automation. The difference now is that old professions like accountancy, law and engineering can also have elements automated that were inconceivable two years ago. The government must prepare to discuss economic principles it has never officially considered in the past, and we must open our own minds to educating the next generations of people in the administration and development of AI, over and above traditional education.
