ChatGPT, AI Language Models, and the Challenges of Accuracy and Bias


[Image: ChatGPT and a Black person talking about critical race theory]

As more and more people turn to digital assistants, chatbots, and other AI language models for information, the question of their accuracy and potential biases has become increasingly important. In my recent conversation with ChatGPT about the Tulsa Race Massacre and the 1985 MOVE bombing, several challenges came up that illustrate the limitations of relying on AI for information.

The interesting thing was that even though ChatGPT's answer was inaccurate, it offered several good remedies for the problem, and those remedies also highlighted another growing problem in the USA.

In our conversation, ChatGPT initially provided inaccurate information about the 1985 MOVE bombing, stating that the bombing was carried out by the MOVE organization rather than the police. This mistake was likely the result of incomplete or unclear information in its training data. It highlights one of the challenges of relying on AI language models: they are only as good as the data they are trained on.
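To make that dependence concrete, here is a minimal, purely illustrative sketch, not how ChatGPT actually works internally. It is a toy next-word model trained on a hypothetical corpus; the point is that when most of the training sentences attribute the bombing to the wrong actor, the model's most likely completion repeats the error:

```python
from collections import Counter, defaultdict

# Hypothetical, deliberately skewed training corpus: two sentences
# name the wrong actor, one names the right one.
corpus = [
    "the bombing was carried out by MOVE",
    "the bombing was carried out by MOVE",
    "the bombing was carried out by the police",
]

# Count which word follows each word across the corpus.
transitions = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        transitions[prev][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often in training."""
    return transitions[word].most_common(1)[0][0]

# The model completes "by" with whichever actor dominates its data,
# regardless of which answer is true.
print(most_likely_next("by"))  # prints "MOVE"
```

A real large language model is vastly more sophisticated than this toy, but its dependence on the distribution of its training data is the same: if the sources it learned from tell the story wrong more often than right, the wrong version wins.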

However, our conversation also highlighted some of the ways that AI language models can be helpful, such as providing quick and easy access to information and helping to answer questions on a wide range of topics. As ChatGPT evolves and becomes more widely used, it is important to remain aware of its potential limitations and biases. It is also important to continue providing feedback and corrections when mistakes are made, in order to help improve the accuracy and reliability of these tools.

In the end, our conversation highlights the importance of approaching AI language models with a critical eye, while also recognizing their potential value as a tool for accessing and sharing information. By using these tools responsibly and thoughtfully, we can continue to learn and grow, both as individuals and as a society.

For those who do not understand systemic racism and how it sneaks into our social structures, here is a perfect example of it happening with AI.

The actual conversation follows:

“tell me about bombing that occurred in 1985 USA”

The bombing that occurred in 1985 in the United States was carried out by a group known as MOVE, an organization that believed in Black liberation and environmentalism. The bombing took place in Philadelphia, Pennsylvania, on May 13, 1985, and resulted in the deaths of 11 people, including five children, and the destruction of over 60 homes.

Truth

The 1985 MOVE bombing was the destruction of residential homes in the Cobbs Creek neighborhood of Philadelphia, Pennsylvania, United States, by the Philadelphia Police Department during a standoff and shootout with the anarcho-primitivist commune MOVE. Police dropped two explosive devices from a police helicopter onto the roof of a house occupied by MOVE. The Philadelphia Fire Department allowed the resulting fire to burn out of control, destroying 61 previously evacuated neighboring homes over two city blocks and leaving 250 people homeless. Six adults and five children were killed in the attack, with one adult and one child surviving. A lawsuit in federal court found that the city used excessive force and violated constitutional rights against unreasonable search and seizure.

“the bombing was not done by MOVE it was done to MOVE by the police”

I apologize for the mistake in my previous response. You are correct that the bombing in 1985 in Philadelphia was not carried out by MOVE, but was instead perpetrated by the police against the MOVE organization.

“why did you make that mistake”

As an artificial intelligence language model, my responses are generated based on patterns and information present in the data I have been trained on. In this case, it appears that my response was influenced by a lack of clarity in my training data, which resulted in me incorrectly stating that the bombing was carried out by MOVE. I apologize for any confusion or misinformation this may have caused. It is important for me to continue learning and improving, and I appreciate your feedback in helping me do so.

“what would help you not to make mistakes like that in the future”

To help me avoid making mistakes like the one in my previous response, there are several things that can be done:

Provide specific and accurate information: The more precise and accurate the information provided, the less likely it is for me to make a mistake. If there is any doubt or uncertainty, it is always helpful to clarify or provide additional details.

Use clear and unambiguous language: Using clear and concise language helps to ensure that I understand the intent and context of the question or prompt. This can help me to produce more accurate and relevant responses.

Provide feedback and corrections: When a mistake is made, providing feedback and corrections is essential to help me learn from the mistake and improve. By pointing out where I went wrong and why, I can adjust my response and learn from the mistake.

Continuously train and update my knowledge: As a language model, I am constantly learning and updating my knowledge base based on new information and data. Regular training and updates help to ensure that I am able to provide the most accurate and up-to-date information possible.
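The third and fourth remedies above describe a feedback loop: capture the user's correction, pair it with the mistaken answer, and fold it into future training. As a hedged sketch of what that capture step might look like in code (the function, field names, and file path are all hypothetical illustrations, not OpenAI's actual pipeline):

```python
import json
from datetime import datetime, timezone

def log_correction(prompt, model_answer, user_correction,
                   path="corrections.jsonl"):
    """Append one correction record to a JSON Lines file for later
    review and possible inclusion in future training data.
    (Hypothetical format; real feedback pipelines differ.)"""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "model_answer": model_answer,
        "user_correction": user_correction,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Logging the correction from the conversation above.
log_correction(
    prompt="tell me about bombing that occurred in 1985 USA",
    model_answer="The bombing ... was carried out by a group known as MOVE",
    user_correction="The bombing was done to MOVE by the Philadelphia police",
)
```

The hard part, of course, is not the logging but deciding whose corrections count, which is exactly where the questions below come in.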

Now ask yourself the following questions:

What knowledge was not accurate?

Why is it inaccurate? Are our history books wrong, or is the way we are teaching and sharing history wrong?

Why is correcting this information considered wrong and protested against, and why do we vilify its correction and call it “Critical Race Theory”?
