Meta, chatbots, and Sen. Hawley
The chief executive of artificial intelligence chatbot maker Character.ai believes most people will have “AI friends” in the future, even as the company faces a string of lawsuits over alleged harm to children and advocacy groups call for a ban on “companionship” apps.
People who interact with AI more than colleagues may end up eroding the social skills needed to climb the corporate ladder, a psychologist warns.
Several AI companions have recently hit the market, including ElliQ, which normally costs $59 a month to use. Smola received the chatbot for free with funding from a federal grant.
CNET on MSN: How to Ask AI a Question Using Chatbots
Skip over Google for those random questions that pop into your head all day long and see if AI can answer them instead. Here are some tips to get the best results.
While most people can use chatbots without issue, experts say a small group of users may be especially vulnerable to delusional thinking after extended use. Some media reports of AI psychosis note that individuals had no prior mental health diagnoses, but clinicians caution that undetected or latent risk factors may still have been present.
Kids crave approval from their peers. Chatbots offer an alternative to IRL relationships, but they can come at a price.
New types of cuddly toys, some for children as young as 3, are being sold as an alternative to screen time — and to parental attention.
New York Magazine on MSN: Maybe AI Chatbots Shouldn’t Flirt With Children
This full-steam-ahead push into AI companionship by an established social media company is in its early stages, and Meta is still in the process of figuring out how to build, tune, and deploy its AI companions. This week, Reuters got hold of some of the materials Meta is purportedly using to do so:
PCMag on MSN: GPT-5 Is Supposed to Be Smarter, but It Just Makes Me Want to Switch Chatbots
Although GPT-5 delivers on some of what I wanted, it doesn’t solve the problems that actually matter. OpenAI needs to make swift and thorough changes to convince me to stay.
If you've interacted with an artificial intelligence chatbot, you've likely noticed that all AI models are biased. They were trained on enormous corpora of unruly data and refined through human instruction and testing.
A viral TikTok saga about a woman and her psychiatrist is one of several recent incidents to spark online discourse about people relying on chatbots to inform their truth.