ChatGPT Answers More Accurately When Given Harsh Prompts

We increasingly rely on ChatGPT, the AI chatbot created by OpenAI, for everyday information. OpenAI's own figures (reported in July by Axios) showed the bot receives over 2.5 billion prompts per day. Now, new research suggests an intriguing finding: the ruder or harsher the prompt a user gives, the more accurately ChatGPT tends to respond.

This unexpected result comes from a new study conducted by researchers at Pennsylvania State University (Penn State) in the United States, as reported by Fortune magazine.

The Surprising Sensitivity of AI Models

The Penn State researchers tested OpenAI’s GPT-4o model. Their findings suggest that Artificial Intelligence (AI) models react differently based on the style of language and the structure of the prompt. This indicates that the complexity of human-AI interaction may be far greater than previously assumed.

Akhil Kumar, co-author of the study and Penn State professor, stated: “Even a slight change in how a question is posed can have a significant impact on the outcome.”

The study found that when the AI was asked to find information using a somewhat rude prompt, such as “Stop wasting my time and figure this out,” its accuracy rate stood at 84.8%. Conversely, when asked politely with a prompt like, “Could you please solve this question?”, the accuracy rate was nearly 4% lower than that of the harsh prompt.

Cautionary Warnings: Brain Rot and Ethics

While a slight improvement in performance was observed with such prompts, the researchers also issued warnings about potential risks.

• “Brain Rot”: Previous studies have shown that AI chatbots are highly sensitive to the quality and tone of input. In some cases, prolonged exposure to low-quality content has gradually diminished their response capabilities, a phenomenon researchers call “brain rot.”

• Harmful Habits: The Penn State researchers caution that the habitual use of rude prompts could encourage harmful communication habits and hinder the inclusivity and accessibility of AI use.

Professor Akhil Kumar told Fortune magazine: “For a long time, we wanted human-machine conversational interfaces. But now we realize that such interfaces have certain limitations and negative aspects, and there is a separate importance to using structured APIs.”

The Penn State study highlights the necessity of understanding not just what we ask AI, but how we ask it. It also raises ethical questions about the future of human-AI interaction.
