Do we need a “license” to use AI chatbots like ChatGPT?
Since 2018, when OpenAI first unveiled GPT-1, AI-powered chatbots, including ChatGPT, Google’s Bard, and Microsoft’s Bing Chat, have transformed how we access knowledge, create content, and even generate images.
They assist with everything in our work and personal lives, from writing emails to answering questions about personal matters. But as they become more capable and our reliance on them grows, an intriguing question arises: Should we create some sort of “license” or certification to ensure these powerful AI tools are used more responsibly?
Just as a driver’s license ensures drivers have the knowledge and skills to operate vehicles safely, an AI “license” could encourage responsible use, protecting individuals and society from potential harm. Let’s explore how this could be beneficial and what it might look like.
Why Consider a “License” for AI Usage?
Spending time with AI chatbots quickly shows how intuitive these tools are, yet they are also complex and capable of influencing opinions. Although generally accurate, they are not always reliable.
Without a basic understanding of how they work and what limitations they have, users may end up relying on them in ways that could lead to misinformation, privacy risks, or even misuse.
Here are some reasons a “license” or certification might be worth considering:
1. Ensuring User Awareness of Limitations: AI chatbots are trained on vast data sets but lack human judgment, sometimes producing incorrect or biased information. For instance, in February 2023, Google’s Bard falsely claimed the James Webb Space Telescope captured the first image of an exoplanet; the error made headlines, damaged Google’s credibility, and was widely reported to have wiped roughly $100 billion off Alphabet’s market value in a single day.
2. Preventing Misinformation and Misuse: AI chatbots can potentially spread harmful content, either unintentionally or through misuse. For example, in one alarming instance, researchers posing as a 13-year-old girl found that Snapchat’s “My AI” provided inappropriate sexual advice. Additionally, the same chatbot platform has been criticised for offering guidance on concealing substance use from parents, further exposing children to potentially harmful information.
3. Protecting Privacy and Data Security: Many users have no idea how much personal information they share with AI chatbots, or what the consequences could be if their chatbot account were hacked. An attacker who stole a user’s login credentials could mine the conversation history for sensitive details, from medical conditions to financial matters, or even query the chatbot directly for personal information about the victim.
4. Encouraging Ethical Use: AI tools can subtly influence opinions, social interactions, and even business decisions. An AI usage license could include ethical guidelines, such as avoiding harmful or inappropriate content, respecting copyright, and being transparent about using AI-generated materials.
What Would an AI “License” Look Like?
Some may find the idea of requiring a license or certificate to use AI-powered chatbots extreme, particularly given that these technologies are meant to be user-friendly. However, as artificial intelligence develops, the hazards connected to misuse or misinterpretation grow.
Just as a driver’s license lowers traffic-related accidents, an AI chatbot “license” might help shield users from the possible drawbacks of this technology.
An AI chatbot certification would enable users in both professional and educational settings to acquire the knowledge required to apply these tools responsibly and effectively. Understanding the nuances of AI-generated content and the need for fact-checking can help students learning with AI-powered study tools, as well as staff members using AI for customer care.
Approaching Responsible AI Use
As AI continues to evolve and integrate into our lives, taking steps to ensure responsible use is essential. While an official “license” might not be necessary for every user, an accessible certification program could help spread awareness of the technology’s benefits and risks. This would promote informed and ethical use, fostering trust and accountability in AI applications.
In the same way that we learn to navigate social media responsibly, perhaps it’s time to think about “driving” AI with an understanding of its power and limitations. Whether formalised as a license or offered as optional courses, fostering responsible AI use will help us all confidently navigate this new landscape.
Dr. Hussam Al Hamadi is the Director of the Master in Cybersecurity Program, University of Dubai