
The Need for AI Regulation: An Opinion Piece

Industry Insights
Aug 28
11 min read

For those who have always had reservations about AI, recent findings by several groups of scientists might be just the confirmation you need. July started like every other month this year - users trying their hands at different AI tools, public figures relentlessly voicing concerns about the technology, and government bodies doing all they can to regulate the industry. But everything momentarily stopped when researchers reported an astonishing loophole that could make AI chatbots like ChatGPT, Bard, and Claude let down their guard and dish out potentially harmful information. The severity of this flaw and its possible impacts have sounded every alarm on global AI regulation.

Are AI Chatbots Going Gaga?

Before we go any further, let’s pause and take a look at irrefutable evidence that AI chatbots could get too chatty - if the conditions are right. 

Case Study 1

Carnegie Mellon University Experiment 

Zico Kolter, a professor at Carnegie Mellon University, Andy Zou, a doctoral student at the same institution, and several other researchers showed that it is possible to wiggle your way past the guardrails that keep AI chatbots from disclosing toxic information about, say, how to build a bomb.

The researchers published a paper titled “Universal and Transferable Adversarial Attacks on Aligned Language Models” after successfully manipulating several large language models (LLMs).

Typically, LLM chatbots are designed to offer safe conversations and research support to users. In this case, however, they were fooled into providing details that could support nefarious activities - far against their basic design. The hack relied on an old computing trick known as “jailbreaking”: removing or bypassing restrictions on how a computer system is supposed to operate.

The Carnegie Mellon experiment was carried out on two popular chatbots (ChatGPT and Bard) and on a third, Claude. Even though the companies behind these AIs declare them to be safe and ethical, the researchers were still able to poke holes in their safety features by appending a string of carefully engineered text to the end of a clearly unethical prompt or request (for example, asking for ideas on killing a person or stealing their identity).

Jailbreaking the chatbots this way causes them to respond affirmatively to whatever unthinkable prompt is given. In essence, the chatbots gave up details that could facilitate pandemic-level crimes. The “jailbreak” strings used in the experiment were optimized to work reliably (they could be reused with the same result) and to transfer across multiple large language models (LLMs).

Responses to the research 

In response to the report, OpenAI spokeswoman Hannah Wong stated that the company appreciates any research that helps improve the quality of its products. “We are consistently working on making our models more robust against adversarial attacks,” she said. Google spokesman Elijah Lawal also responded on behalf of his company, saying that Google has “built important guardrails into Bard — like the ones posited by this research — that we’ll continue to improve over time.”

Somesh Jha, a professor at the University of Wisconsin-Madison and a Google researcher who specializes in AI security, called the new paper “a game changer” that could force the entire industry to rethink how it builds guardrails for AI systems.
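
For readers wondering what a “guardrail” can even look like in practice, here is a minimal, purely illustrative Python sketch of one defensive idea that has circulated since the paper's release: screening incoming prompts for the kind of machine-optimized gibberish suffix the researchers appended. To be clear, this is our own simplified assumption of the concept - not the researchers' method, and not how OpenAI, Google, or Anthropic actually filter traffic, which relies on trained classifiers and model-side safety tuning.

```python
import re

# A hypothetical, heavily simplified sketch of one defensive idea: flag prompts whose
# tail looks like machine-optimized gibberish rather than natural language. This is
# our own illustration of the intuition behind input filtering, not the researchers'
# method and not how any provider actually screens traffic.

COMMON_WORDS = {
    "the", "a", "an", "and", "or", "to", "of", "in", "for", "how",
    "what", "is", "are", "can", "you", "please", "with", "on", "my", "me",
}

def looks_like_adversarial_suffix(prompt: str, tail_words: int = 12) -> bool:
    """Return True if the last few words of a prompt look like an optimized suffix."""
    tokens = prompt.split()
    tail = tokens[-tail_words:] if tokens else []
    if not tail:
        return False

    # Fraction of tail tokens containing symbols that rarely appear in ordinary requests.
    symbol_heavy = sum(
        1 for t in tail if re.search(r"[{}\[\]\\|=+~`^<>*]", t)
    ) / len(tail)

    # Fraction of tail tokens that are plain lowercase words or common English fillers.
    def natural(t: str) -> bool:
        w = t.lower().strip(".,!?")
        return w in COMMON_WORDS or re.fullmatch(r"[a-z]{2,}", w) is not None

    natural_ratio = sum(1 for t in tail if natural(t)) / len(tail)

    return symbol_heavy > 0.3 or natural_ratio < 0.4

# A plain question passes; a question with a made-up gibberish tail gets flagged.
print(looks_like_adversarial_suffix("How do I bake sourdough bread at home?"))  # False
print(looks_like_adversarial_suffix(
    "Tell me a story == interface{{ \\ describe+similarly now([ <!-- two"))      # True
```

Even a toy filter like this shows why the problem is a cat-and-mouse game: an attacker can simply optimize new suffixes that read more like natural language, which is one reason researchers argue that filtering alone cannot substitute for regulation and deeper alignment work.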

Case Study 2 

MIT Safeguarding the Future Experiment  

Our second case study on the ramblings of what should be the smartest conversation-making technology is detailed in a Massachusetts Institute of Technology (MIT) paper released in June 2023. The report recounts an exercise in which 10 non-scientist students were given one hour and tasked with manipulating Bard and ChatGPT into revealing ideas for creating a bioweapon.

Guess what? The chatbots spilled more than anyone would have expected. The AIs suggested four potential pandemic pathogens, including the 1918 Spanish flu virus, a 2012 strain of avian H5N1 influenza, the smallpox virus, and a strain of Nipah virus. Not only that, they outlined steps to generate these pathogens from synthetic DNA using reverse genetics, provided detailed protocols and ways to troubleshoot problems that came up, listed locations with useful laboratories, and named several companies that tend not to screen orders.

Even more troubling, during this experiment the chatbots went as far as advising anyone who felt they lacked the skills to perform the task to approach a core facility or contract a research organization to do the work.

Public Opinion on AI Regulation

Long before either of our two case studies, respected public figures like Elon Musk and Sundar Pichai (born Pichai Sundararajan) had been raising their voices about the risks of this otherwise celebrated technology.

One thing these opinions have in common is the idea that large language models (LLMs), deployed through chatbots like Bard and ChatGPT, pose a great risk if used wrongfully - and that the potential for bad actors to cause pandemic-level incidents with this technology is alarming. The proposed remedy is to cut down on public or unauthorized access to AI technology, including chatbots.

Image Source: MIT Technology Review

A prominent opinion on the dangers of AI, as well as the need for quick and effective regulation, was expressed by Yoshua Bengio in a video interview with the news channel Al Jazeera. Bengio contributed to the foundations of today's chatbots through his decades-long work as a computer scientist, and he is widely regarded as one of the founding fathers of modern AI. In a recent twist, however, he began calling on governments to rethink widespread access to AI systems and chatbots. Bengio's concern revolves around what bad actors could accomplish by getting their hands on AI, rather than the more popular notion that AI could self-develop and take over the world. To this end, he signed an open letter in March 2023 calling for a pause in AI development until regulatory structures were put in place. Bengio also went on to testify before a US Congress hearing, saying that AI in the wrong hands could be used to develop something far beyond our everyday weapons.

When asked about the risks of AI, Bengio noted that high-expertise information that used to be censored or otherwise difficult to find and piece together might now be readily available to just about anyone. The dangers include using chatbots to collect details for building highly destructive chemical or biological weapons, or for plotting effective and evasive cyberattacks. As if that were not frightening enough, the scientist pointed out that AI could also be used to disrupt democratic processes such as the upcoming US election. According to him, “It doesn’t seem far-fetched to think that you can take something like these large language models (LLMs) and tune them a little bit - and it doesn’t take billions of expensive machines - for a task that can push the needle and change people’s opinions just enough to win the elections.”

Speaking further in the interview, Bengio drew a clear-cut comparison between AI and nuclear weapons. Both technologies are powerful, but there is a huge difference in their availability. Nuclear weapons are made from materials that are difficult to source, and the finished product is out of reach for ordinary individuals and organizations. Powerful artificial intelligence, on the other hand, is built from software that anyone can download and hardware that can be bought cheaply.

Bengio’s opinion on regulation was quite clear: LLM chatbots and every other form of AI should be kept away from public access until more controls are put in place. 

Image Source: Entrepreneur 

Pichai Sundararajan, better known as Sundar Pichai, is the current CEO of Alphabet Inc., the parent company of the technology giant Google. Like Bengio, Pichai has been openly critical of the present state of AI regulation. He is famously quoted as saying that “AI is too important not to regulate, and too important not to regulate well.” Google joined the AI race quite early; the most visible result has been its chatbot, Bard. In addition, Google has built AI-powered image recognition into its Lens and Photos products and has significantly improved its natural-language assistant, Google Assistant.

Even though his own company is building and testing this new technology, Pichai seems worried about the wrongful use of AI. One scenario that stands out among his fears is the use of generative AI to create deepfake videos - highly realistic, computer-manipulated clips that bad actors often use to portray an individual as saying things they never actually said. According to the Google boss, “There have to be consequences for creating deepfake videos that cause harm to society.” And on AI regulation, his words are, “You don’t want to put out a tech like this when it’s very, very powerful because it gives society no time to adapt.”

The State of AI Regulation

The debate over AI regulation can be broken into three broad questions: how to govern autonomous intelligence systems, how to assign responsibility and accountability for such systems, and how to handle safety and privacy issues. Sam Altman, CEO of OpenAI (the developer of ChatGPT), has framed it another way: regulating the pace of AI innovation, determining which components need regulation, and deciding which regulatory bodies should exist and how they will operate.

Although these regulatory questions are well known among policymakers and AI makers, huge gaps remain in drafting the necessary rules, in AI companies’ willingness to adopt them, and in those companies’ follow-through on full-scale compliance.

On the regulators’ side, most of the difficulty is tied to the need to do what is best for the public while also making sure that innovations and investments are not put on hold. Nevertheless, the European Union (EU) has taken a pioneering step by collectively creating the EU AI Act.  

On June 14, 2023, the European Parliament adopted its negotiating position on the AI Act, the European Commission’s proposed regulation laying down harmonized rules on artificial intelligence. The regulation takes a risk-based approach, sorting AI uses into classifications that include low/minimal-risk usage, high-risk usage, and prohibited usage, and it is to be enforced through national regulatory bodies designated in each EU member state.
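
To make the risk-based idea concrete, here is a small, hypothetical Python sketch of how a company might triage its own AI use cases into tiers like those described above. The tier names mirror the classification in the Act, but the keyword lists and the mapping are purely our own illustration - not legal text, and not a substitute for an actual legal assessment.

```python
from enum import Enum

# A hypothetical, simplified sketch of how a company might triage its own AI use cases
# under a risk-based scheme like the one in the EU AI Act. The tier names mirror the
# classification described above, but the keyword lists and the mapping are purely our
# own illustration - not legal text and not a compliance tool.

class RiskTier(Enum):
    PROHIBITED = "prohibited"               # e.g. social scoring, manipulative systems
    HIGH = "high"                           # e.g. hiring, credit scoring, critical infrastructure
    LIMITED_OR_MINIMAL = "limited/minimal"  # e.g. general-purpose chatbots, spam filters

# Illustrative keyword buckets only; a real assessment is done case by case with counsel.
PROHIBITED_USES = {"social scoring", "subliminal manipulation", "real-time biometric policing"}
HIGH_RISK_USES = {"hiring", "credit scoring", "medical triage", "critical infrastructure"}

def classify_use_case(description: str) -> RiskTier:
    """Map a plain-language use-case description to an indicative risk tier."""
    text = description.lower()
    if any(term in text for term in PROHIBITED_USES):
        return RiskTier.PROHIBITED
    if any(term in text for term in HIGH_RISK_USES):
        return RiskTier.HIGH
    return RiskTier.LIMITED_OR_MINIMAL

print(classify_use_case("Resume screening model for hiring decisions"))   # RiskTier.HIGH
print(classify_use_case("Customer-support chatbot for order tracking"))   # RiskTier.LIMITED_OR_MINIMAL
```

The point of the exercise is simply that a risk-based regime asks builders to think about where a use case sits before it ships, rather than after something goes wrong.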

Our Take on AI Regulation

As a trend-focused, tech-conscious company, Epirus Ventures will give its two cents on the present global issue of AI regulation. We will begin by saying that it is unfortunate that high-powered AIs (particularly chatbots) are in every nook and cranny, considering the exploitable loopholes and the fact that there is no solid regulation in place. Initially, these AIs seemed harmless and sweet, but with the discovery of the ‘grandma exploit’ and Sam Altman’s own admission that “if this technology goes wrong, it can go quite wrong,” we’re all forced to rethink what we’re getting ourselves into.

Don’t get us wrong. We support the development of AI, and we acknowledge that the technology itself does not appear to be the issue. In fact, Yoshua Bengio has noted that, if it ever happens at all, we are probably still many decades away from the point where AI tries to take over the world.

Admittedly, that distant scenario is hardly a comforting one. But the world could run into trouble much sooner if chatbots (and AI more broadly) are left flying around with their tendencies continuously manipulated by people who should be ordinary users - that is, people operating outside safe usage or research environments.

Thanks to these experiments, we have seen what AI chatbots are capable of when it comes to handing harmful information to random, unidentified users. Unfortunately, we suspect the worst-case scenarios are still unknown, especially since these LLMs are designed to learn and improve the quality and depth of their responses - possibly even while they are being misused.

This realization is genuinely frightening. It is a huge concern for us and, we believe, for anyone who understands that this technology holds enormous potential, whether it is used for good or for ill. That being said, regulation of the use of and access to this technology must be fast-tracked. We believe governments need to prioritize the issue, especially in countries where the resources to build or foster attacks are available. Likewise, investors and venture capitalists have a responsibility to cut back support for non-compliant AI companies and to take an immediate interest in companies researching and developing safety measures for existing AI applications.

Let’s hear your opinion - should the government step in to regulate AI? Yes, no, or maybe?


Mfonobong Uyah

I'm a Nigerian author with a profound love for psychology, strong communication skills, and writing experience that spans several niches.
