July 25, 2024

New AI Safety Coalition and Proposed Technical Regulations

Google is leading a new coalition to address AI security risks. Together with other tech giants, including Amazon, Microsoft, and OpenAI, it has formed the Coalition for Secure AI (CoSAI) to tackle pressing issues like software supply chain security and AI security governance.

Current AI risks, like misinformation and deepfakes, demand immediate attention. While the long-term threats of artificial general intelligence (AGI) and artificial superintelligence (ASI) are significant, the proliferation of misleading content poses a more urgent challenge.

Two potential technical solutions are proposed:

  1. Embedding-based tracking: By analyzing the underlying mathematical representations (embeddings) of AI-generated content, it may be possible to identify the source of misinformation or deepfakes. This involves building a database of embeddings from various AI tools so that suspect content can be traced back to the tool that produced it, and potentially blocked.
  2. Penalty-tuning for LLMs: To hold AI models accountable for harmful outputs, a system of penalties could be implemented. This might involve restricting access to computing resources or slowing down response times for models that generate misinformation or deepfakes.
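The first proposal can be illustrated with a minimal sketch: a registry that stores unit-normalized embeddings alongside the tool that produced them, and traces a suspect embedding to its closest registered source via cosine similarity. The embedding vectors, tool names, and the 0.9 similarity threshold below are all illustrative assumptions, not part of any actual CoSAI mechanism.

```python
import math

def _unit(v):
    """Normalize a vector to unit length so dot products are cosine similarities."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

class EmbeddingRegistry:
    """Toy registry mapping content embeddings back to the tool that produced them."""

    def __init__(self):
        self._entries = []  # list of (unit_vector, source) pairs

    def register(self, embedding, source):
        """Record an embedding generated by a given AI tool."""
        self._entries.append((_unit(embedding), source))

    def trace(self, embedding, threshold=0.9):
        """Return the most similar registered source, or None if nothing is close enough."""
        q = _unit(embedding)
        best_sim, best_src = -1.0, None
        for v, src in self._entries:
            sim = sum(a * b for a, b in zip(q, v))  # cosine similarity of unit vectors
            if sim > best_sim:
                best_sim, best_src = sim, src
        return best_src if best_sim >= threshold else None

registry = EmbeddingRegistry()
registry.register([0.9, 0.1, 0.0], "tool-A")   # hypothetical tool names and vectors
registry.register([0.0, 0.2, 0.95], "tool-B")
print(registry.trace([0.88, 0.12, 0.05]))  # → tool-A
```

A production system would use approximate nearest-neighbor search over millions of high-dimensional embeddings rather than a linear scan, but the tracing logic is the same: match the suspect content's embedding against the database and report the closest registered source.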

These proposals aim to enhance AI safety by providing technical mechanisms to detect and mitigate risks.
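The penalty-tuning idea could be sketched as an accountability layer in front of a model: flagged outputs accumulate penalty points, and points translate into response-time delays. The class, penalty weights, and model names below are hypothetical, intended only to show the shape of such a mechanism.

```python
import time

class PenaltyThrottle:
    """Toy accountability layer: models that generate flagged content
    accumulate penalty points, and each point adds a response delay."""

    def __init__(self, delay_per_point=0.5):
        self.points = {}                        # model name -> accumulated penalty points
        self.delay_per_point = delay_per_point  # seconds of delay added per point

    def penalize(self, model, points=1):
        """Add penalty points when a model's output is flagged as harmful."""
        self.points[model] = self.points.get(model, 0) + points

    def delay_for(self, model):
        """Current response delay, in seconds, for a given model."""
        return self.points.get(model, 0) * self.delay_per_point

    def respond(self, model, generate):
        """Run a model's generation callable, throttled by its penalty delay."""
        time.sleep(self.delay_for(model))
        return generate()

throttle = PenaltyThrottle(delay_per_point=0.5)
throttle.penalize("model-X", points=3)   # e.g. three outputs flagged as misinformation
print(throttle.delay_for("model-X"))     # → 1.5
print(throttle.delay_for("model-Y"))     # → 0
```

The open questions, which the article does not resolve, are who assigns the penalties and how flagged outputs are adjudicated; the mechanism above only enforces whatever verdict such a process produces.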
