Signatories include social media platforms X (formerly Twitter), Snap, Adobe and Meta, the owner of Facebook, Instagram and WhatsApp.

Proactive

However, the accord has some shortcomings, according to computer scientist Dr Deepak Padmanabhan, from Queen's University Belfast, who has co-authored a paper on elections and AI. He told the BBC it was promising to see the companies acknowledge the wide range of challenges posed by AI. But he said they needed to take more proactive action instead of waiting for content to be posted before seeking to take it down. That could mean more realistic AI content, which may be more harmful, stays on a platform for longer than obvious fakes, which are easier to detect and remove, he suggested.
Dr Padmanabhan also said the accord's usefulness was undermined because it lacked nuance when it came to defining harmful content. He gave the example of jailed Pakistani politician Imran Khan using AI to make speeches while he was in prison. Should this be taken down too, he asked.

Weaponised

The accord's signatories say they will target content which deceptively fakes or alters the appearance, voice or actions of key figures in elections. It will also seek to deal with audio, images or videos which provide false information to voters about when, where and how they can vote.
"We have a responsibility to help ensure these tools don't become weaponised in elections," said Brad Smith, the president of Microsoft.

[Video caption: US Deputy Attorney General Lisa Monaco says AI could be used to "incite violence"]

On Wednesday, the US deputy attorney general, Lisa Monaco, told the BBC that AI threatened to "supercharge" disinformation at elections. Google and Meta have previously set out their policies on AI-generated images and videos in political advertising, which require advertisers to flag when they are using deepfakes or content which has been manipulated by AI.