OpenAI Blocks Over 250,000 Requests to Create Election Candidate Images
OpenAI, the company behind the AI chatbot ChatGPT, has rejected more than 250,000 requests to generate images of key US election candidates with its image-generation model, DALL-E. The rejections, disclosed in a company blog update on Friday, were part of measures aimed at ensuring the safety and integrity of the election period.
Requests for AI-generated images of high-profile figures such as president-elect Donald Trump, his vice-presidential pick JD Vance, President Joe Biden, Democratic presidential candidate Kamala Harris, and her running mate Tim Walz were all blocked. According to OpenAI, these refusals were implemented as part of “safety measures” to prevent the platform from being used to create misleading or harmful content in the lead-up to Election Day.
“These guardrails are especially important in an elections context and are a key part of our broader efforts to prevent our tools being used for deceptive or harmful purposes,” the blog post explained. The company emphasized that it had not seen evidence of any widespread influence operations in US elections through its platforms.
In addition to the image rejections, OpenAI revealed that it had taken action earlier this year against a political influence campaign linked to Iran. In August, OpenAI blocked the Iranian campaign, known as Storm-2035, from generating political content on US elections, which attempted to impersonate both conservative and progressive news outlets. Accounts tied to this campaign were subsequently banned from using OpenAI’s services.
The company also highlighted in an October update that it had disrupted more than 20 separate influence operations and deceptive networks from around the world that had attempted to use OpenAI tools for misleading purposes. However, the company’s report noted that none of these election-related operations managed to generate significant “viral engagement.”
OpenAI’s proactive steps to protect the integrity of its platforms during the election period are part of its broader efforts to minimize the potential for AI-generated content to be used maliciously or deceptively, especially in politically sensitive contexts. Despite concerns about the potential misuse of AI technologies for creating misinformation, OpenAI asserts that its safety measures are working to limit such risks.