OpenAI knows ChatGPT is causing serious mental health problems for some users. And it is already "fixing" it

Strengthening ChatGPT’s responses in sensitive conversations | OpenAI «We worked with more than 170 mental health experts to help ChatGPT more reliably recognize signs of distress, respond with care, and guide people toward real-world support, reducing responses that fall short of our desired behavior by 65–80%.» openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations/

Read more

Be Careful What You Tell Your AI Chatbot | Stanford HAI A Stanford study reveals that leading AI companies are pulling user conversations for training, highlighting privacy risks and a need for clearer policies. hai.stanford.edu/news/be-careful-what-you-tell-your-ai-chatbot

Read more