Google, Microsoft, Nvidia, OpenAI Launch Coalition for AI Security

  • CoSAI, led by Google, aims to set robust AI security standards with industry leaders like Amazon and IBM.
  • Global AI safety efforts are hindered by diverse regulatory frameworks and geopolitical tensions.
  • China’s AI strategy prioritizes national security, contrasting with Western democracies’ rights-based approaches.

To address safety concerns associated with AI, the tech giants Google, Microsoft, Nvidia, and OpenAI have founded the Coalition for Secure AI (CoSAI). Unveiled at the Aspen Security Forum, CoSAI aims to establish robust security standards and guidance for the development and use of AI. The initiative is a response to the field's explosive growth.

The Google-led CoSAI also includes PayPal, Amazon, Cisco, IBM, Intel, and other key industry players. Using open-source methods and standardized frameworks, the group is working toward secure-by-design AI systems, with the goal of boosting confidence and security in AI applications. The announcement builds on Google's Secure AI Framework (SAIF) and stresses the need for a comprehensive security framework for AI.

The coalition will initially focus on three workstreams: developing AI security governance, preparing defenders for a changing cybersecurity landscape, and improving software supply chain security for AI systems.

Global Perspectives on AI Safety

Global consensus on AI safety remains elusive, with definitions, benchmarks, and regulatory approaches varying across nations. Democratic jurisdictions such as Canada, the US, the UK, and the EU favor risk-based, human-centric AI governance models rooted in rights and democratic values. Even among them, however, differences persist in how risk levels and obligations for AI developers are defined.

China's approach, by contrast, frames AI risk in terms of sovereignty, social stability, and national security. The recent Shanghai Declaration outlines China's vision for global AI cooperation and reflects these distinct political priorities.

The Road Ahead

Efforts to enhance convergence and interoperability among diverse AI governance approaches are ongoing. While differences in AI safety definitions and practices persist, international collaboration remains crucial. China’s participation in global AI safety summits and bilateral meetings with the US demonstrates potential avenues for cooperation despite ideological disparities.

Achieving a unified global definition of AI safety faces challenges due to political and ideological differences. However, ongoing dialogue and collaborative efforts, such as CoSAI, represent pivotal steps toward establishing comprehensive AI security frameworks that transcend national borders and political systems.
