New ASCII art attacks can jailbreak LLMs

New ASCII art attacks can jailbreak LLMs. Anthropic releases new AI models, Elon Musk sues OpenAI, and Google is taking steps to tackle AI spam content.

What a week of events in the AI space; let's dive in.

📰 News:

â„šī¸ ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs

🔗 https://lnkd.in/dGKu3_cD

â„šī¸ Anthropic presents the Claude 3 family. Three new large language models with impressive benchmarks.

🔗 https://lnkd.in/d8dTSPXj

â„šī¸ Here Comes The AI Worm: Unleashing Zero-click Worms

🔗 https://lnkd.in/dmqn7QNZ

â„šī¸ New ways of tackling spammy, low-quality content on Search | Google

🔗 https://lnkd.in/dC4YpYyq

â„šī¸ Elon Musk sues OpenAI for abandoning original mission for profit

🔗 https://lnkd.in/dEFzbvnK

â„šī¸ Adobe previews new generative AI tools for crafting and editing custom audio

🔗 https://lnkd.in/dtheGc99
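
For readers curious about the mechanics behind the ArtPrompt headline, here is a minimal, benign sketch of the general idea of ASCII-art prompt cloaking. This is my own illustration, not the paper's code: the prompt template, the [MASK] placeholder, and the use of pyfiglet for rendering are all assumptions made for demonstration purposes.

```python
# Illustrative sketch only (not the ArtPrompt authors' code): render a masked
# keyword as an ASCII-art banner and splice it into a prompt template, so that
# simple plain-text keyword filters never see the word itself.
# Requires: pip install pyfiglet
import pyfiglet


def cloak_keyword(prompt_template: str, keyword: str) -> str:
    """Replace the [MASK] placeholder with an ASCII-art rendering of `keyword`."""
    ascii_art = pyfiglet.figlet_format(keyword)  # multi-line banner text
    return prompt_template.replace("[MASK]", "\n" + ascii_art)


if __name__ == "__main__":
    # Benign example: the word is only legible as a drawn banner.
    template = "First read the word drawn below, then explain what it means:\n[MASK]"
    print(cloak_keyword(template, "HELLO"))
```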
