AI-driven content moderation on social media raises user concerns
A recent study commissioned by the media regulator NMHH shows how AI moderation is shaping social media and unsettling users.
The NMHH commissioned a study that sheds light on how Facebook and YouTube moderate content and restrict accounts. According to study author Zsolt Ződi, large platforms use artificial intelligence (AI) to make millions of sanction decisions every month, often without prior human review. Although the EU Digital Services Act (DSA) requires transparency in these processes, users are usually informed about the reasons only inadequately, if at all. Those affected can lodge an objection, but the complaints procedures are largely automated, so blocks are rarely lifted.
The major platforms do not publish country-specific data on their moderation practices, so the University of Public Service (NKE) conducted a representative survey. According to it, around 15% of respondents – almost half a million Hungarians in total – have already had content deleted or restricted, an increase of five percentage points compared with previous years. Half of those affected had been blocked more than once, and a quarter of the accounts were blocked permanently. Only 10% of the blocked posts and accounts were subsequently restored.
Apart from clearly illegal content, YouTube even allows the removal of content that ‘could cause harm’ to the service. Typical reasons for blocking are spam and fake accounts, and hundreds of thousands of posts are also restricted for hate speech and disinformation. In many cases, however, posts are shadow-banned without the user’s knowledge, so they simply stop appearing in other users’ feeds. The study emphasises that although online platforms have become indispensable, they should allow significantly more human contact in these procedures and would therefore need to employ considerably more staff.