
How We’re Getting AI Risk Wrong


By Aidan Gomez

Artificial intelligence stands to change human experience more than any technology since the computer. Yet as a society, we seem distracted from the regulatory concerns that will matter most in this new reality. The discussion about the risks of AI has ranged from vague to downright cynical; in a few cases, executives seem to have exaggerated certain risks to emphasize their own technological capabilities.

It is time for us to be serious and deliberate in addressing the clearest and most pressing risks that AI presents. Many of my peers have argued that we cannot afford to repeat the mistakes we made with social media, when we delayed timely and thorough consideration of its challenges.

They are correct.

However, spending our time and resources stoking existential fear of AI has served as a distraction, one might argue a convenient distraction, from risks very similar to those we faced with social media, risks that AI has the potential to exacerbate.

As AI and large language model (LLM) technology moves from a consumer novelty to a core part of products and businesses, there is a set of challenges that needs to be addressed. These include protecting sensitive data, mitigating bias and misinformation, and knowing when to keep humans in the loop for oversight.

These three areas are perhaps less extraordinary than the notion of a technology-enabled terminator taking over the world. However, they are the most likely and immediate threats to our collective wellbeing.

Over the last few decades, instances of private data becoming public have caused enormous damage. It is essential that the industry minimize and mitigate the risk of data leakage or exposure, which is especially pertinent since some generative AI services train models on user data. We have already seen issues with AI companies improperly training their models on proprietary corporate data, a nightmare scenario for any privacy officer.

Similarly, as companies and the public sector integrate AI into their daily operations, it is critical that they have tools to tie output to accurate, authoritative, reliable sources of information, so that important decisions are not based on incorrect or out-of-date information. It is also essential that the industry establish collaboration across the AI ecosystem to support research and develop best practices that avoid introducing bias during model training and fine-tuning, which would erode trust and cause harm, especially to underrepresented groups.

Finally, we need to ensure that AI acts in the service of humanity. AI offers tremendous benefits to society and enormous productivity gains for the workforce, but in high-stakes domains such as healthcare and law, it cannot be deployed without the oversight and safeguards of a human in the loop. There is no question that AI will become more integral to, and more integrated into, our daily work, but it is equally clear that AI cannot replace the role that humans play.

To those in the industry who earnestly believe that doomsday scenarios are the most serious risks we face with AI, I welcome the difference of opinion, even as I respectfully disagree. We can work to address those more speculative scenarios while also prioritizing the very real risks that we know exist today and that could quickly become worse.

The challenges we face as an industry and as a society are real as AI moves from proof of concept to production deployment. We need to remain clear-headed, collaborative, and honest as we address these risks. Only with a realistic discussion of the issues before us, and a collaborative approach to addressing them, can we avoid repeating our previous mistakes.

Aidan Gomez is the CEO and Co-founder of Cohere.
