OpenAI Reported ChatGPT Use by Threat Actors to Influence Elections


By Muhammad Hussain

OpenAI released a report detailing attempts by threat actors to use ChatGPT to influence elections, noting that none of the efforts gained significant viral engagement.

Key Takeaways

  • OpenAI says threat actors continuously attempt to use ChatGPT to influence elections.
  • The company blocks malicious actors’ accounts from around the world once they are identified.
  • OpenAI also said that none of the malicious activities achieved significant viral engagement.

On Wednesday, OpenAI published a 54-page report stating that it has seen multiple attempts by threat actors to use its AI model ChatGPT to influence elections through fake long-form website articles, social media posts, and comments.

OpenAI recently cemented its position with a $6.6 billion investment round and secured a $4 billion revolving line of credit, boosting its liquidity to more than $10 billion.

This popularity also attracts malicious activity, but OpenAI says it handles such matters carefully and quickly once they are identified.

The ChatGPT creator said it disrupted

“more than 20 operations and deceptive networks from around the world that attempted to use our models.”

Recent investigations have revealed that several state-affiliated hacker groups, including those funded by North Korea and Iran, attempted to use AI to develop sophisticated social engineering operations and influence public opinion.

These groups allegedly employed AI to create content for spear-phishing emails, conduct surveillance, and evade cybersecurity protections. 

In July, it also suspended several Rwandan accounts that were being used to post comments about that country’s elections on the social media site X.

The company stated that most of the identified posts had few or no likes, shares, or comments. In late August, an Iranian operation used OpenAI’s technologies to generate social media comments and “long-form articles” about the U.S. election and other issues.

In May, an Israeli company likewise used ChatGPT to create social media commentary around the Indian elections. According to OpenAI, the issue was resolved in less than a day.

In June, OpenAI revealed that it had shut down a covert operation generating commentary on the European Parliament elections in France, as well as on American, German, Polish, and Italian politics. The company said the majority of the social media posts it discovered had few likes or shares, though some real people did comment on the AI-generated posts.

OpenAI downplayed the threat, saying:

“None of the activities that attempted to influence global elections drew viral engagement or sustainable audiences.”

The ongoing AI era is not only introducing new malicious activities but also automating everyday tasks. Beyond our work lives, even our drives are becoming AI-centric: get ready for AI-assisted electric driving, as Uber has launched an AI assistant powered by OpenAI’s GPT-4 technology.

For more AI, cybersecurity, and digital marketing insights, visit Daily Digital Grind.

If you’re interested in contributing, check out our Write for Us page to submit your guest posts!