18 countries have entered into an agreement on the safety of artificial intelligence technology

By Alex

The United States, Britain and more than a dozen other countries have unveiled the first detailed international agreement on how to protect artificial intelligence from being used for malicious purposes. The parties called on companies to create AI systems that are “safe by design.”

In the 20-page document, 18 countries agreed that companies developing and using AI must design and implement it in a way that protects customers and the general public from misuse. The agreement is non-binding and contains mostly general recommendations such as monitoring AI systems for abuse, protecting data from tampering and vetting software providers.

The signatory countries are also weighing how to protect AI technology itself from hackers. The document's recommendations include releasing AI models only after appropriate security testing.

However, the paper does not address pressing questions about the proper use of AI or how the data used in these models is collected.

The rapid rise of AI has prompted many concerns, including its potential use to undermine democratic processes or commit fraud, the prospect of dramatic job losses, and other possible harms.

In addition to the United States and Great Britain, the document was also signed by Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and Singapore.
