TikTok will be employing AI to help remove violative content on its platform, but is it enough? We asked three experts to weigh in.
TikTok is evolving. The platform is only four years old, but it already has over one billion users and is available in over 150 countries. It even beat out YouTube, Instagram, WhatsApp and Facebook Messenger in terms of downloads, racking up 33 million in a single quarter in 2019, according to Sensor Tower.
So even if you’re not on TikTok, it’s likely that at least one person you know is. Read on to find out how the platform is looking to become safer for its users and what experts think about the recent changes.
What is TikTok doing to keep its users safe?
TikTok recently announced that it would begin using technology to automatically remove some types of violative content as soon as it is identified at upload, in addition to the content removed manually by its Safety Team.
TikTok says automation will be reserved for content categories where its technology has the highest accuracy, starting with violations concerning the safety of minors, adult nudity and sexual activities, violent and graphic content, and illegal activities and regulated goods.
The company also says it will continue to improve the precision of the technology to avoid incorrect removals; in the meantime, creators can appeal a video removal directly if they feel it was unfair.
The new system counts violations made by a user, making note of the severity and frequency of any violations. The user will be notified of the consequences, which can be found in the Account Updates section in their Inbox.
A user’s first violation will result in a warning, unless the violation falls under a zero-tolerance policy, in which case they will be banned immediately. For subsequent violations, the user’s account will be suspended for 24 to 48 hours, depending on the severity, during which time they will not be able to upload videos, comment, or edit their profile.
After several violations, the user will be notified that their account is close to being banned, and if the behaviour continues then the account will be permanently removed.
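The escalation policy described above can be sketched as a simple strike-counting system. This is a hypothetical illustration only: TikTok has not published its actual implementation, and the zero-tolerance category name and ban threshold below are assumed values.

```python
from dataclasses import dataclass

# Hypothetical sketch of the escalating-enforcement policy described above.
# Category names and thresholds are illustrative, not TikTok's actual values.
ZERO_TOLERANCE = {"child_safety"}   # assumed zero-tolerance category
BAN_THRESHOLD = 5                   # "several violations" - assumed value

@dataclass
class Account:
    violations: int = 0
    banned: bool = False

def record_violation(account: Account, category: str, severe: bool) -> str:
    """Record a violation and return the action taken."""
    if category in ZERO_TOLERANCE:
        account.banned = True
        return "permanent ban"
    account.violations += 1
    if account.violations == 1:
        return "warning"
    if account.violations >= BAN_THRESHOLD:
        account.banned = True
        return "permanent ban"
    # Suspension length scales with severity (24-48 hours, per the article)
    hours = 48 if severe else 24
    return f"suspended {hours}h (no uploads, comments, or profile edits)"
```

In this sketch, each account carries its own violation count, so a warning, a timed suspension, and a permanent ban fall out of the same function depending on history and severity.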
Is automation more effective for removing violative content?
“Yes. Prior to this, human moderators had to confirm a violation before something was removed. Humans are slow and manpower is limited, which might allow a violating piece of content to spread before it’s removed, or not be removed at all,” Paul Bischoff, Privacy Advocate at Comparitech, told Trusted Reviews.
“TikTok is removing the human bottleneck from more clear-cut cases in which its automated systems are reasonably confident that a violation has occurred.”
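Systems like the one Bischoff describes are commonly built around a confidence threshold: the classifier auto-removes a post only when its score is very high, and routes borderline cases to human moderators. A minimal sketch of that routing logic follows; the threshold values are assumptions for illustration, not figures TikTok has disclosed.

```python
# Hypothetical sketch of confidence-gated moderation: auto-remove only
# high-confidence violations, queue uncertain cases for human review.
# Threshold values are illustrative assumptions.
AUTO_REMOVE_THRESHOLD = 0.95
REVIEW_THRESHOLD = 0.60

def route_upload(violation_score: float) -> str:
    """Decide what happens to an upload given a classifier score in [0, 1]."""
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"   # clear-cut case, no human bottleneck
    if violation_score >= REVIEW_THRESHOLD:
        return "human_review"  # uncertain, so a moderator confirms
    return "publish"           # likely fine
```

The design choice here is the trade-off the experts debate below: raising the auto-remove threshold reduces false positives but pushes more work back onto the slower human queue.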
An AI system would be able to work faster than a human team, and Chris Hauk also believes automated systems are more accurate.
“Automated systems are not subject to human error or fatigue. An automated system can work 24/7 to remove a violating piece of content. Automated systems are also not likely to remove (or not to remove) a piece of content, simply because of a human’s opinion as to what may or may not be in violation of standards,” Chris Hauk, Consumer Privacy Champion at Pixel Privacy, explained to Trusted Reviews.
Will the AI also accidentally remove non-violative content?
“Inevitably it will. There will be teething problems for sure that will cause some violent content to slip through and also for false positives to occur. It will take time for the TikTok algorithms to become refined enough to avoid this video content being misclassified,” Tom Gaffney, Security Consultant for F-Secure, told Trusted Reviews.
“Longer-term, it will be interesting to see how content is classified in different regions. In this instance, the policy has been experimented on using US & Canadian subjects but social norms do differ around the globe so there will be some mistakes. I have some sympathy with TikTok on this matter as they are a commercial entity being asked to ‘police’ Internet content,” Gaffney went on to say.
It is not unusual for a social media platform to use AI to help remove violative or offensive content; The Verge reported on Facebook’s use of AI to sort content for quicker moderation in 2020.
However, with the automated system being relatively new for TikTok, Bischoff agreed that the platform might struggle to correctly identify content worthy of removal.
“It’s inevitable that an automated system like this will occasionally misidentify something as a violation. Under the new system, content can be removed without human oversight. TikTok will try to minimize false positives to a reasonable degree and still allows for appeals of automatic removals,” Bischoff explained.
It’s believed that the system will need to be ‘taught’ what it should and shouldn’t remove, though such systems are never perfect. During the pandemic, YouTube removed video content that did not go against the company’s policy, as measures to protect staff meant the AI was not being aided by human review, as reported by TechCrunch.
What can users do to avoid seeing violative content?
“TikTok has a number of tools to help parents limit exposure of children from seeing the content. Parents can ensure children use the appropriate age settings on the app or for best protection use the ‘pairing feature’, which allows parents more granular controls,” Gaffney explained to Trusted Reviews.
Hauk also claims that reporting content you don’t want to interact with will help the algorithm to better understand your preferences and what should and shouldn’t be allowed on the site.
“[Users should] report content that they feel violates the platform’s community standards. While the content may not immediately be removed, reports help the automated system learn and improve,” Hauk said.
A user can report videos and accounts on TikTok by following these instructions:
To report a video on TikTok:
- Go to the video you want to report
- Tap the Share button
- Select Report and follow the instructions provided on the screen
To report an account on TikTok:
- Go to the profile you want to report
- Tap the three dots on the upper right side of the screen to see more options
- Tap the Report button
- Select Report Account from the options
- Follow the on-screen instructions to describe what the problem is
Should TikTok be doing more?
“While all social networks should be concerned over violative content, they need to be careful to not restrict free speech. Protective measures can quickly turn into censorship if content removal isn’t performed fairly and without prejudice,” Hauk commented.
Bischoff also argued that the new suspension system should be effective, even if suspended or banned users create another account.
“Suspensions are effective in some instances and are a necessary part of preventing people from abusing the platform. Even though someone could just create a new account, they would still have to rebuild their follower base,” Bischoff explained.
TikTok has not commented on any feature that would suspend or ban accounts linked to each other; however, a user would have to rebuild their following if they chose to make another account after being suspended.
“This is a pretty good pro-active step by TikTok, putting them ahead of many of the other social media companies. Perhaps the question should be ‘what more should governments be doing to enable all social media companies to offer similar services?’,” Gaffney went on to say.
“TikTok has taken something of a lead here, but it’s somewhat crazy that we are asking commercial companies to decide what content is safe on the Internet.”
However, Gaffney also argued that more could be done to help protect the younger users of the platform, considering the main demographic of TikTok is under 34, with 32.5% of the users aged between 10 and 19, according to recent statistics.
“And a general comment from this side, this is a good move by TikTok, for users and for its PR, but it has a long way to go when it comes to privacy. The general user terms are pretty opaque and there are a number of good studies showing its data collection exceeds that of even other social media companies,” Gaffney explained.
“While this last point is debatable it’s definitely an app which is profiling our youth. If it wanted to go one step further, why not guarantee to collect no personally identifiable user data for under 18s and allow them to experience the Internet without being commercialised?”
If you’re concerned about your online footprint, check out our picks for the best VPN, which can help protect you from snooping by keeping your browsing traffic encrypted.