We have been fantasising about artificial intelligence for a long time. This obsession has materialised in cultural masterpieces, in films and books such as 2001: A Space Odyssey, Metropolis, Blade Runner, The Matrix, I, Robot, and Westworld, among others. Most of these works raise deep philosophical questions about human nature, but they also explore the potential behaviours and ethics of artificial intelligence, usually through a pessimistic lens. Although they are only works of fiction, they show how wary we are of our creations becoming our masters.
The democratisation of AI took a major step forward when large language models emerged. But for all the praise they have received, they have rung just as many alarm bells. We quickly witnessed flaws inherent in these new AI models, such as hallucinations, as well as unethical usage including misinformation and copyright infringement, prompting concerns and warnings from the brightest minds in the space. Their point was that we should not enter an AI innovation race without the right security and ethical guardrails to mitigate the use of AI for malicious purposes, or the creation of defective AI systems that could have serious ramifications for our society.
Conversations about regulating AI are happening worldwide, which should help foster healthy progress. Members of the EU are leading this effort, having already agreed on the AI Act back in December, and the hope is that it will influence other regulations globally, much as the GDPR did for privacy. In November, a number of nations also signed an agreement making security the number one priority in AI design requirements.
It is reassuring to see proactive governments starting to adopt AI legislation and regulations, but legislation moves slowly enough that we could still be a couple of years away from these rules having a real impact on curbing the unethical and unsafe use of the technology. In the meantime, organisations need to take the matter into their own hands. More companies than ever will have the opportunity to consume, experiment with, integrate, and develop AI systems in the upcoming months and years, and existing principles should be considered and used as guidelines for doing so responsibly.
- Security and privacy cover four pillars:
  - Using AI securely, for example by ensuring that sensitive data is not exposed to public GenAI tools and that privacy is not jeopardised (a minimal sketch follows below). It also means considering the ethical aspects: some jurisdictions have started penalising companies for using biased AI, which may become an increasingly common form of enforcement.
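To make the first pillar concrete, sensitive fields can be stripped from a prompt before it ever reaches a public GenAI tool. The sketch below is a minimal, hypothetical illustration in Python; the regex patterns, the `redact` helper, and the sample prompt are all assumptions rather than any vendor's actual API, and a real deployment would lean on dedicated data-loss-prevention tooling.

```python
import re

# Hypothetical patterns for obvious PII; real DLP tooling is far more thorough.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a PII pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Illustrative usage: redact before the prompt leaves the organisation.
prompt = "Summarise this ticket from jane.doe@example.com (SSN 123-45-6789)."
print(redact(prompt))
# Summarise this ticket from [EMAIL REDACTED] (SSN [SSN REDACTED]).
```

The design point is simply that redaction happens on the organisation's side of the boundary, before any third-party GenAI service sees the text.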