OpenAI sees continued attempts by threat actors to use its models for election influence

OpenAI has seen a number of attempts to use its AI models to generate fake content, including long-form articles and social media comments, aimed at influencing elections, the ChatGPT maker said in a report on Wednesday.

Cybercriminals are increasingly using AI tools, including ChatGPT, to aid their malicious activities, such as creating and debugging malware and generating fake content for websites and social media platforms, the startup said.
