LLM-generated passwords ‘fundamentally weak,’ experts say • The Register Seemingly complex strings are actually highly predictable, crackable within hours www.theregister.com/2026/02/18/generating_passwords_with_llms/
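The Register's point is that LLM output only looks random. A safer pattern, shown here as a minimal sketch (not from the article), is a CSPRNG-based generator such as Python's standard `secrets` module:

```python
import secrets
import string

# Minimal sketch: draw each character from OS-level randomness.
# Unlike an LLM's "random-looking" strings, every character here is
# independent and uniform: log2(94) ~ 6.55 bits per character for
# the full letters+digits+punctuation alphabet.
def generate_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

A 16-character password over this 94-symbol alphabet carries roughly 105 bits of entropy, well beyond the "crackable within hours" strings the article describes.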
Category: Security
Copilot Chat bug bypasses DLP on ‘Confidential’ email
Copilot Chat bug bypasses DLP on ‘Confidential’ email • The Register Data Loss Prevention? Yeah, about that… www.theregister.com/2026/02/18/microsoft_copilot_data_loss_prevention/
The Microsoft 365 Copilot Chat flaw that let “confidential” emails slip out, and what it teaches about AI in the office wwwhatsnew.com/2026/02/21/el-fallo-de-microsoft-365-copilot-chat-que-dejo-asomar-correos-confidenciales-y-lo-que-ensena-sobre-la-ia-en-la-oficina/
AI agents can spill secrets via malicious link previews
AI agents can spill secrets via malicious link previews • The Register Zero-click prompt injection can leak data when AI agents meet messaging apps, researchers warn www.theregister.com/2026/02/10/ai_agents_messaging_apps_data_leak/
OpenClaw and its dangers
Team9 – Bring OpenClaw AI Agent to Your Team | Part of Moltbook Ecosystem team9.ai/
It’s easy to backdoor OpenClaw, and its skills leak API keys • The Register www.theregister.com/2026/02/05/openclaw_skills_marketplace_leaky_security/
OpenClaw is the most viral, fascinating, and dangerous AI of the moment. Because of that last point, it has partnered with the Málaga-based
Claude Code ignores ignore rules meant to block secrets
Claude Code ignores ignore rules meant to block secrets • The Register Developers remain unsure how to prevent access to sensitive data www.theregister.com/2026/01/28/claude_code_ai_secrets_files/
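Claude Code does expose deny rules in its settings file; the report is that ignore-style protections have proven unreliable in practice. As a hedged sketch (syntax as I understand Anthropic's documentation; verify against current docs), a project-level `.claude/settings.json` might deny reads of secret files:

```json
{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(./.env.*)",
      "Read(./secrets/**)"
    ]
  }
}
```

The article's caveat stands: developers “remain unsure how to prevent access,” so treat any such rule as defense in depth, not a guarantee.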
What Clawdbot (Moltbot) is, what it can do on your computer, and what its dangers are
The OpenClaw AI assistant is a security nightmare – Le Monde Informatique www.lemondeinformatique.fr/actualites/lire-l-assistant-ia-openclaw-est-un-cauchemar-de-securite-99246.html
Moltbot, the AI assistant with the soul of a lobster that promises to “do things” and raises uncomfortable questions wwwhatsnew.com/2026/01/30/moltbot-el-asistente-de-ia-con-alma-de-langosta-que-promete-hacer-cosas-y-plantea-preguntas-incomodas/
The Moltbot phenomenon worries cybersecurity experts – Le Monde Informatique www.lemondeinformatique.fr/actualites/lire-le-phenomene-moltbot-inquiete-les-experts-en-cybersecurite-99215.html
What Clawdbot is, what it can do
Sinaptic.AI: Data Protection Extension
Sinaptic.AI – Data Protection Extension | Prevent PII Leakage to AI Tools sinaptic.ai/ Secure Your AI Workflow Prevent accidental leakage of sensitive PII and PHI to ChatGPT, Claude, Gemini and other AI services. Real-time detection, local processing, and enterprise-grade security.
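Sinaptic.AI's extension is proprietary, but the “real-time detection” idea it advertises can be sketched with a simple local redactor that scrubs text before it ever reaches an AI service. The patterns and names below are illustrative assumptions, not the product's actual rules:

```python
import re

# Hypothetical minimal PII patterns; a real tool would use far more
# robust detection (validation, context, checksums such as Luhn).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each detected PII span with a [LABEL] placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text
```

For example, `redact_pii("Contact alice@example.com, SSN 123-45-6789")` returns `"Contact [EMAIL], SSN [SSN]"`. Running this locally, before any network call, mirrors the “local processing” design the extension advertises.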
Users prompt Grok AI chatbot to make photos dirty, apologize
Users prompt Grok AI chatbot to make photos dirty, apologize • The Register www.theregister.com/2026/01/03/elon_musk_grok_scandal_underwear_strippers_gross/
Grok, Elon Musk’s chatbot, once again at the center of controversy for undressing women with AI www.20minutos.es/tecnologia/inteligencia-artificial/grok-chatbot-elon-musk-centro-polemica-desnudar-mujeres-ia_6916632_0.html
Grok is undressing anyone, including minors | The Verge www.theverge.com/news/853191/grok-explicit-bikini-pictures-minors
Grok, la IA de Musk,
AI browsers are clicking the same scams you’d never fall for
AI browsers are clicking the same scams you’d never fall for www.makeuseof.com/ai-browsers-click-scams-youd-never-fall-for/
Seguir leyendoBe Careful What You Tell Your AI Chatbot | Stanford HAI A Stanford study reveals that leading AI companies are pulling user conversations for training, highlighting privacy risks and a need for clearer policies. https://hai.stanford.edu/news/be-careful-what-you-tell-your-ai-chatbot?utm_source=newsletter&utm_medium=email&utm_content=Be%20Careful%20What%20You%20Tell%20Your%20AI%20Chatbot&utm_campaign=Research%2C%20News%2C%20and%20Events%20-%20October%2024%2C%202025
Be Careful What You Tell Your AI Chatbot | Stanford HAI A Stanford study reveals that leading AI companies are pulling user conversations for training, highlighting privacy risks and a need for clearer policies. hai.stanford.edu/news/be-careful-what-you-tell-your-ai-chatbot