Google leads a new coalition to address AI security risks. A group of tech giants, including Amazon, Microsoft, and OpenAI, has formed the Coalition for Secure AI (CoSAI) to tackle pressing issues like software supply chain security and AI security governance.
Current AI risks, like misinformation and deepfakes, demand immediate attention. While the long-term threats of artificial general intelligence (AGI) and artificial superintelligence (ASI) are significant, the proliferation of misleading content poses a more urgent challenge.
Two potential technical solutions are proposed:
- Embedding-based tracking: By analyzing the underlying mathematical representations (embeddings) of AI-generated content, it may be possible to identify which AI tool produced a piece of misinformation or a deepfake. This would involve building a database of embeddings from various AI tools so that suspect content can be traced back to its source and, potentially, blocked.
- Penalty-tuning for LLMs: To hold AI models accountable for harmful outputs, a system of penalties could be implemented, such as restricting access to computing resources or throttling response times for models that repeatedly generate misinformation or deepfakes.
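The embedding-based tracking idea above could be sketched roughly as follows. This is a minimal illustration, not a description of any deployed system: the tool names, the in-memory `embedding_db`, and the similarity threshold are all hypothetical, and a real system would use learned embeddings and an approximate nearest-neighbor index rather than a brute-force scan.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical registry mapping AI tools to a characteristic
# embedding of their outputs (toy 3-dimensional vectors here).
embedding_db = {
    "tool_a": [0.9, 0.1, 0.0],
    "tool_b": [0.1, 0.8, 0.2],
}

def trace_source(content_embedding, db, threshold=0.85):
    """Return the tool whose stored embedding best matches the
    content, or None if no match clears the threshold."""
    best_tool, best_score = None, 0.0
    for tool, emb in db.items():
        score = cosine_similarity(content_embedding, emb)
        if score > best_score:
            best_tool, best_score = tool, score
    return best_tool if best_score >= threshold else None
```

For example, `trace_source([0.88, 0.12, 0.01], embedding_db)` would attribute the content to `"tool_a"`, while an embedding unlike anything in the database returns `None`, flagging it as untraceable.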
These proposals aim to enhance AI safety by providing technical mechanisms to detect and mitigate risks.
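The penalty-tuning proposal could be implemented at the serving layer rather than inside the model itself. The sketch below is one hedged interpretation: a hypothetical `PenaltyGate` wrapper that slows a model's responses as violations accumulate and suspends it past a limit. The class name, delay schedule, and violation limit are illustrative assumptions, not part of any existing framework.

```python
import time

class PenaltyGate:
    """Hypothetical serving-layer gate that throttles a model's
    responses in proportion to its recorded harmful outputs."""

    def __init__(self, delay_per_violation=0.5, max_violations=3):
        self.violations = 0
        self.delay_per_violation = delay_per_violation
        self.max_violations = max_violations

    def record_violation(self):
        """Called when a moderation system flags an output."""
        self.violations += 1

    def current_delay(self):
        """Added latency grows linearly with violations."""
        return self.violations * self.delay_per_violation

    def allowed(self):
        """Suspend the model once it exceeds the violation limit."""
        return self.violations < self.max_violations

    def respond(self, generate):
        """Run the model's generate() callable under the penalty policy."""
        if not self.allowed():
            raise PermissionError("model suspended: too many harmful outputs")
        time.sleep(self.current_delay())
        return generate()
```

A fresh gate imposes no delay; each flagged output adds latency, and repeat offenders are cut off entirely, which mirrors the "restrict compute or slow responses" mechanism described above.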