This is an interesting experiment – TikTok has outlined how it’s looking to reduce the potentially harmful impacts of algorithmic amplification by limiting the number of videos from certain sensitive categories that are highlighted in users’ ‘For You’ feeds.
That could reduce polarization, and stop users from feeling overwhelmed by certain topics.
As explained by TikTok:
“We recognize that too much of anything – whether it’s animals, fitness tips, or personal well-being journeys – doesn’t fit with the diverse discovery experience we aim to create. That’s why our recommendation system works to intersperse recommendations that might fall outside people’s expressed preferences, offering an opportunity to discover new categories of content. For example, our systems won’t recommend two videos in a row made by the same creator or with the same sound. Doing so enriches the viewing experience and can help promote exposure to a range of ideas and perspectives on our platform.”
That, in itself, helps to broaden the TikTok experience, and keep things fresh. But now, TikTok’s also looking to expand its system limits to ensure that users are not shown too much content on certain topics.
“As we continue to develop new strategies to interrupt repetitive patterns, we’re looking at how our system can better vary the kinds of content that may be recommended in a sequence. That’s why we’re testing ways to avoid recommending a series of similar content – such as around extreme dieting or fitness, sadness, or breakups – to protect against viewing too much of a content category that may be fine as a single video but problematic if viewed in clusters.”
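TikTok hasn’t published how this works internally, but the two rules it describes – never recommending two consecutive videos from the same creator or with the same sound, and capping runs of sensitive-category content – can be sketched as a simple greedy pass over a ranked candidate list. The `Video` fields, category names, and run cap below are all illustrative assumptions, not TikTok’s actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Video:
    creator: str   # hypothetical fields for illustration only
    sound: str
    category: str

# Categories treated as fine as a single video, but risky in clusters
# (examples taken from TikTok's own statement).
SENSITIVE = {"extreme_dieting", "sadness", "breakups"}
MAX_SENSITIVE_RUN = 1  # assumed cap on consecutive sensitive videos

def diversify(candidates):
    """Greedy re-ranking pass: skip any candidate that would repeat the
    previous video's creator or sound, or extend a run of
    sensitive-category content past the cap."""
    feed = []
    run = 0  # length of the current run of sensitive-category videos
    for v in candidates:
        prev = feed[-1] if feed else None
        if prev and (v.creator == prev.creator or v.sound == prev.sound):
            continue  # no two in a row from the same creator/sound
        if v.category in SENSITIVE:
            if run >= MAX_SENSITIVE_RUN:
                continue  # would cluster sensitive content
            run += 1
        else:
            run = 0
        feed.append(v)
    return feed
```

In this toy version, a second back-to-back “sadness” video is simply skipped in favor of the next candidate, which is the interruption of “repetitive patterns” the statement describes.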
This is a key concern with algorithmic amplification: recommendation systems use binary engagement signals to show you more of what you interact with, without any context as to why you might be viewing or otherwise engaging with certain clips.
If you’re in a vulnerable state and end up watching a string of videos related to the same concerns, that signals to the system that you want more of that content – when, in reality, routing more of that material to you at that moment could cause further harm, a distinction a machine learning system can’t make without additional guidance.
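The mechanism described above is easy to see in a deliberately naive interest model, where watch completion is treated directly as interest. This is a minimal sketch of the general problem, not TikTok’s actual ranking logic – the category names and scoring are assumptions for illustration:

```python
from collections import defaultdict

def update_interest(profile, category, watch_seconds, duration_seconds):
    """Naive engagement model: completion rate is treated as interest,
    with no notion of *why* the user watched."""
    completion = min(watch_seconds / duration_seconds, 1.0)
    profile[category] += completion
    return profile

profile = defaultdict(float)

# A user in a low moment watches three 30-second 'sadness' clips to the end,
# but only skims a comedy clip...
for _ in range(3):
    update_interest(profile, "sadness", 30, 30)
update_interest(profile, "comedy", 5, 30)

# ...and the model now ranks 'sadness' far above everything else,
# with no way to know that more of it may be harmful right now.
ranked = sorted(profile, key=profile.get, reverse=True)
```

The model has no input that distinguishes “engaged because interested” from “engaged because vulnerable” – which is exactly the gap TikTok’s cluster limits are meant to paper over.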
It’s a smart exploration by TikTok – and while it won’t necessarily be able to catch all possible instances of potential harm in this respect, if it can limit the impacts of some of the worst elements, that could be significant.
In addition to this, TikTok’s also developing a new option that would enable people to nominate words or hashtags associated with content that they don’t want to see in their ‘For You’ feed, reducing unwanted exposure based on personal preference.
That could be diet videos, make-up tutorials – whatever you find triggering, you’d be able to reduce, or maybe even eliminate entirely from your feed, lessening its potential impact on your experience.
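TikTok hasn’t detailed how this muting option would match content, but the described behavior amounts to a simple exclusion filter over captions and hashtags. The function name, tuple format, and case-insensitive word matching below are hypothetical:

```python
def filter_feed(videos, muted_terms):
    """Drop any (caption, hashtags) pair whose hashtags or caption words
    match a muted term, compared case-insensitively with any leading
    '#' stripped."""
    muted = {t.lower().lstrip("#") for t in muted_terms}
    kept = []
    for caption, hashtags in videos:
        tags = {h.lower().lstrip("#") for h in hashtags}
        words = {w.lower() for w in caption.split()}
        if muted & (tags | words):
            continue  # video mentions a muted term: exclude it
        kept.append((caption, hashtags))
    return kept

# A user who mutes diet-related terms would no longer see the first clip:
videos = [
    ("my new diet plan", ["#WeightLoss"]),
    ("cat does a flip", ["#cats"]),
]
kept = filter_feed(videos, ["diet", "#weightloss"])
```

A production system would likely match on more than literal strings (misspellings, related terms), but the user-facing contract is the same: listed terms disqualify a video from the feed.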
Given its popularity among younger users, this is a critical area of focus for TikTok, which is already under significant scrutiny over the impact that its trends and content can have on young, impressionable users.
Giving people more capacity to control their ‘For You’ recommendations could be a big step – but enhancing its automated handling of potentially sensitive topics could be even more valuable, as not everyone has the self-awareness to moderate their own experience in this way.
Considering its rapid growth, TikTok has done fairly well in providing algorithmic protections thus far. Its addictive algorithm, and its capacity to pull from a huge pool of publicly uploaded clips, really is the app’s secret sauce, and the reason for its massive success.
With that in mind, intelligent explorations like this are key to keeping users as safe from harm as possible – and TikTok, which doesn’t rely on personal connections in the same way as other social apps, has more room to make adjustments like these.