Copilot Chat bug bypasses DLP on ‘Confidential’ email

Copilot Chat bug bypasses DLP on 'Confidential' email ("Data Loss Prevention? Yeah, about that…") • The Register www.theregister.com/2026/02/18/microsoft_copilot_data_loss_prevention/
The Microsoft 365 Copilot Chat flaw that let "confidential" emails slip out, and what it teaches about AI in the office • wwwhatsnew.com wwwhatsnew.com/2026/02/21/el-fallo-de-microsoft-365-copilot-chat-que-dejo-asomar-correos-confidenciales-y-lo-que-ensena-sobre-la-ia-en-la-oficina/

Read more

What Clawdbot (or Moltbot) is, what it can do on your computer, and what its dangers are

The OpenClaw AI assistant is a security nightmare – Le Monde Informatique www.lemondeinformatique.fr/actualites/lire-l-assistant-ia-openclaw-est-un-cauchemar-de-securite-99246.html
Moltbot, the AI assistant with the soul of a lobster that promises to "do things" and raises uncomfortable questions • wwwhatsnew.com wwwhatsnew.com/2026/01/30/moltbot-el-asistente-de-ia-con-alma-de-langosta-que-promete-hacer-cosas-y-plantea-preguntas-incomodas/
The Moltbot phenomenon worries cybersecurity experts – Le Monde Informatique www.lemondeinformatique.fr/actualites/lire-le-phenomene-moltbot-inquiete-les-experts-en-cybersecurite-99215.html

Read more

Users prompt Grok AI chatbot to make photos dirty, apologize

Users prompt Grok AI chatbot to make photos dirty, apologize • The Register www.theregister.com/2026/01/03/elon_musk_grok_scandal_underwear_strippers_gross/
Grok, Elon Musk's chatbot, once again at the center of controversy for undressing women with AI • 20minutos.es www.20minutos.es/tecnologia/inteligencia-artificial/grok-chatbot-elon-musk-centro-polemica-desnudar-mujeres-ia_6916632_0.html
Grok is undressing anyone, including minors | The Verge www.theverge.com/news/853191/grok-explicit-bikini-pictures-minors

Read more

Be Careful What You Tell Your AI Chatbot

Be Careful What You Tell Your AI Chatbot | Stanford HAI A Stanford study reveals that leading AI companies are pulling user conversations for training, highlighting privacy risks and a need for clearer policies. hai.stanford.edu/news/be-careful-what-you-tell-your-ai-chatbot

Read more