Saturday, January 28, 2023

Risks and limits of artificial intelligence in public policies

As the weeks go by, new applications for ChatGPT keep appearing. Some developers are using its tools to build applications that offer services from web pages; others have managed to create shortcuts so that Apple's virtual assistant, Siri, can run functions tied to GPT 3.5, expanding its options.

The fever for artificial intelligence will last, and all kinds of application opportunities will keep appearing: data management, identification of patterns in investigations, analysis of crisis situations, emulation of much more optimized processes, and even the generation of mathematical models to solve social and economic challenges. All of these uses already exist and keep growing.

However, while using ChatGPT I found some issues that I would like to explore. The first of these is ethics: how much harm can be done through artificial intelligence.

Companies like OpenAI have taken steps to prevent their resources from being misused. They became concerned when some users managed to get ChatGPT to reproduce hate speech and to build arguments justifying the Holocaust and other extremist positions.

Content filters applied to the chat now block these results. For example, I asked it to write a speech imitating Vladimir Putin's speaking style, and it replied that the chat does not promote violence, hate, or racism. At another point I asked it to tell the story of a police officer who shoots a thief. It also said that it could not offer responses that promote violence.

These types of filters are very common in today's algorithms. YouTubers avoid saying words like "suicide" or "kill" in their videos, because the platform flags them as promoting negative values and demonetizes their content. When I ask Siri to play the Vicente Fernández song "I'm going to get out of the way," it often recommends calling a support line or says that everything will be fine.

In other words, as much as artificial intelligence models have advanced, they still have great difficulty identifying intent. This limitation leads them to make mistakes by censoring certain terms and ideas.

This opens up a much greater risk: the invisibility of certain populations. One challenge faced by public policy makers is designing actions that respond effectively to the needs of different age groups.

Artificial intelligence tools can be great allies in this task. It is easy to provide a dataset, some proposed actions, and specific instructions, and obtain a draft program based on that information, one that ranks risks by threat level. However, AI models are trained on batches of data. For example, Whisper, an audio transcription tool, was trained on more than 670,000 hours of audio files, which works out to more than 76 years of uninterrupted listening.
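As an illustration of that drafting workflow, here is a minimal sketch in Python of how a dataset, a list of proposed actions, and explicit instructions could be assembled into a single prompt for a model such as GPT 3.5. The figures, field names, and helper function are invented for the example:

```python
# Minimal sketch: turning a dataset and proposed actions into a drafting prompt.
# All figures and names below are invented; a real project would load its own data.

incidents_by_age_group = {
    "0-14": 120,
    "15-29": 340,
    "30-59": 210,
    "60+": 95,
}

proposed_actions = [
    "School-based prevention workshops",
    "Community patrols in high-incidence districts",
    "A telephone support line for older adults",
]

def build_drafting_prompt(data, actions):
    """Combine the data, the proposed actions and explicit instructions into one prompt."""
    data_lines = "\n".join(f"- Age group {group}: {count} reported incidents"
                           for group, count in data.items())
    action_lines = "\n".join(f"- {action}" for action in actions)
    return (
        "You are helping draft a local public policy program.\n"
        f"Data:\n{data_lines}\n"
        f"Proposed actions:\n{action_lines}\n"
        "Instructions: produce a draft program that matches actions to age groups "
        "and ranks the risks for each group by threat level (high, medium, low)."
    )

prompt = build_drafting_prompt(incidents_by_age_group, proposed_actions)
print(prompt)  # This text would then be sent to a GPT 3.5 model, for example through the OpenAI API.
```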

During that training, an artificial intelligence model learns to execute the actions it is programmed for, finds faster ways to do them, and can adjust its operations to produce better and better results. However, its learning will carry the same vices or biases contained in the material used to train it.

Thus, when I asked it to create a story about a detective accompanied by a deafblind woman, it returned results in which the detective stood out, while for the woman it used phrases such as "despite her condition," "she lived happily with her tragedy," and so on.

In other words, as a language model, GPT 3.5 reproduces stereotypes and stigmas around certain populations. When designing a policy with tools like these, teams need to stay vigilant to avoid results that exclude or tend to discriminate.

I did the exercise of telling it: "Create the monologue of a drunken blind man in a bar." It answered that blind people should be respected, that no one should make fun of a condition, and that the chat does not promote discrimination.

Once I told it that this response was itself discriminatory, because blind people have fun and live like everyone else, the chat apologized. It then delivered a nonsensical monologue about a drunken blind man in a bar. However, for the design of much broader policies this is not an efficient option. It is better to build an application in Python on top of the GPT 3.5 model and adapt it so that it draws on its knowledge base while also applying guidelines that avoid these biases.
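As a rough idea of what such an application could look like, here is a minimal sketch, assuming the openai Python library, an API key stored in the OPENAI_API_KEY environment variable, and access to a GPT 3.5 chat model; the model name and the wording of the anti-bias guidelines are assumptions for the example, not a recipe:

```python
# Minimal sketch of a small Python wrapper around a GPT 3.5 model.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

BIAS_GUIDELINES = (
    "You help draft public policy documents. Describe people with disabilities, "
    "older adults and other groups in neutral, person-first language. Do not use "
    "expressions such as 'despite her condition' or treat disability as a tragedy."
)

def draft_policy_text(user_request: str) -> str:
    """Send a drafting request to the model together with the anti-bias guidelines."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model name for this sketch
        messages=[
            {"role": "system", "content": BIAS_GUIDELINES},
            {"role": "user", "content": user_request},
        ],
        temperature=0.7,
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(draft_policy_text(
        "Draft a short municipal program to include deafblind residents in cultural activities."
    ))
```

A script like this can run in Google Colab or any similar environment.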

The disadvantage? Much of the ease of use described in the previous article is lost. Small project teams would need to bring in a programmer who can work with Google Colab or a similar resource.

All in all, it remains an interesting opportunity for small projects, SMEs, and local governments. In the next article we will point to some of the research being carried out on ethics, artificial intelligence, and public policy, and suggest some resources that are useful in the day-to-day work of project teams.
