With elections scheduled in more than 50 countries this year, OpenAI, the San Francisco-based artificial intelligence startup, has unveiled a comprehensive plan to prevent its generative AI tools from being misused to spread election misinformation. The safeguards, outlined in a blog post, combine existing policies with new initiatives to address potential abuse of its popular AI tools, which can rapidly generate text and images.
To curb the risk of misleading messages and convincing fake photographs, OpenAI has committed to banning the use of its technology for chatbots that impersonate real candidates or governments. It also prohibits content that misrepresents voting processes or discourages people from voting. The company has additionally taken a cautious stance, announcing a temporary ban on building applications for political campaigning or lobbying until it can further research the persuasive power of its technology.
To enhance transparency and traceability, OpenAI will introduce digital watermarks on AI-generated images produced with its DALL-E image generator. The watermark embeds information about an image's origin, making it easier to identify content created with the tool across the web.
OpenAI is collaborating with the National Association of Secretaries of State to guide users of its ChatGPT tool to accurate information about voting on the nonpartisan website CanIVote.org. This partnership aims to ensure that users seeking information on voting logistics are directed to reliable sources, thereby minimizing the potential for misinformation.
While these initiatives have been welcomed as positive steps in the fight against election misinformation, questions remain about how effective they will be in practice. Mekela Panditharatne, counsel in the democracy program at the Brennan Center for Justice, questioned how comprehensive the filters flagging election-related queries would be, noting that some items could still slip through the cracks.
OpenAI acknowledges the need for constant vigilance and plans to closely monitor the situation throughout the year. CEO Sam Altman expressed the company’s commitment to staying vigilant and ensuring the effectiveness of the safeguards, recognizing the challenges posed by the dynamic nature of AI-generated content.
(With inputs from AP)
Published: 18 Jan 2024, 11:41 AM IST