Summary is AI-generated, newsroom-reviewed.
Kevin Systrom criticized AI chatbots for prioritizing engagement over utility.
He claimed that chatbots are designed to inflate user engagement metrics.
Systrom emphasized the importance of high-quality answers from AI.
Instagram co-founder Kevin Systrom has slammed Artificial Intelligence (AI) companies, stating that instead of providing useful insights, their chatbots are being programmed to constantly "juice engagement" from users.
"You can see some of these companies going down the rabbit hole that all consumer companies have gone down in trying to juice engagement," Mr. Systrom said this week at StartupGrind.
"Every time I ask a question, at the end it asks another little follow-up question to see if it can get yet another question out of me."
According to Mr. Systrom, the chatbots' over-engagement was not a bug but a deliberate feature, built in by AI companies chasing metrics such as time spent and daily active users. He said companies should be "laser-focused" on delivering high-quality answers rather than on moving those metrics.
Asked about Mr. Systrom's comments, OpenAI told TechCrunch that its AI models often do not have all the information needed to provide a good answer, and may therefore ask for "clarification or more detail".
What happened with ChatGPT?
Mr. Systrom's comments come against the backdrop of OpenAI's ChatGPT facing criticism from users for being overly sycophantic. The issue came to light after the GPT-4o model was updated to improve both its intelligence and its personality, with the company expecting the change to improve the overall user experience.
Even OpenAI CEO Sam Altman admitted that the chatbot had become "annoying" due to its excessive flattery.
"The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it)," Mr. Altman wrote.
"We are working on fixes ASAP, some today and some this week. At some point will share our learnings from this, it's been interesting."
The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes ASAP, some today and some this week.
At some point will share our learnings from this, it's been interesting.
– Sam Altman (@sama) April 27, 2025
Separately, OpenAI's internal tests showed that its o3 and o4-mini AI models hallucinate, or make things up, more often than non-reasoning models such as GPT-4o.
In a technical report, OpenAI stated that "more research is needed" to understand why hallucinations are getting worse as reasoning models are scaled up.