OpenAI Uncovers Evidence of a Chinese A.I.-Powered Surveillance Tool


OpenAI said on Friday that it had uncovered evidence that a Chinese security operation had built an artificial-intelligence-powered surveillance tool to gather real-time reports about anti-China posts on social media services in Western countries.

The company’s researchers said they had identified this new campaign, which they called Peer Review, because someone working on the tool used OpenAI’s technologies to debug some of the computer code that underpins it.

It is the first time the company has uncovered an A.I.-powered surveillance tool of this kind.

“Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our A.I. models,” Mr. Nimmo said.

There are growing concerns that A.I. can be used for surveillance, hacking, disinformation and other malicious purposes. Though researchers like Mr. Nimmo say the technology can certainly enable these kinds of activities, they add that A.I. can also help identify and stop such behavior.

Ben Nimmo, a principal investigator at OpenAI. Credit…Alexander Cogen for The New York Times

Mr. Nimmo and his team believe the Chinese surveillance tool is based on Llama, an A.I. technology built by Meta, which has open-sourced it, meaning the company shared its work with software developers around the world.

In a detailed report on the use of A.I. for harmful and deceptive purposes, OpenAI also said it had uncovered a separate Chinese campaign, called Sponsored Discontent, that used OpenAI’s technologies to generate English-language posts criticizing Chinese dissidents.

OpenAI said the same group had used the company’s technologies to translate articles into Spanish before distributing them in Latin America. The articles criticized U.S. society and politics.

OpenAI researchers also identified a campaign, believed to be based in Cambodia, that used the company’s technologies to generate and translate social media comments that helped drive a scam known as “pig butchering,” the report said. The A.I.-generated comments were used to woo men on the internet and entangle them in an investment scheme.

(The New York Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to A.I. systems. OpenAI and Microsoft have denied those claims.)
