Are we, denizens of the digital age, at the mercy of algorithms and bots forever?
Most of us are aware of the confirmation bias inherent in our 2022 social media feeds. We know that we see posts for reasons that may have more to do with companies’ interests than our own. Conversations about what’s wrong with social media abound, and there’s no lack of finger-pointing at Big Tech and media companies run amok.
We know all of this, but not necessarily what we can do about it — or if we should do anything at all.
Honestly, it would be great to have some actionable steps to improve online experiences on individual and corporate levels. Here are some suggestions.
Take Matters Into Your Own Hands
Wired’s Chris Seneca laid out five steps to counteract the bots and see beyond the view the algorithms have programmed for us. Some measures are a bit labor-intensive, and far from foolproof, but any steps are better than sitting back and letting the internet be cordoned off quietly (right?).
#1 “Like” everything. Algorithms can’t categorize you if they can’t determine what you really like. Be generous with your thumbs-ups and hearts, and you’ll be rewarded with something beyond grateful friends who are glad you noticed their posts. This should result in a “continued churn of content from confused platforms who must show you everything as they assess and reevaluate your preferences.” Translation: the more diverse the content you interact with, the more varied what you see will become.
#2 Follow media aligned on all sides. We are advised to set aside our preconceived notions and follow “prestige” publications across the political spectrum. Better still, follow outlets in several countries or continents; a combination of Fox News and the BBC should keep your newsfeed clear of the most polarizing stories pushed by anonymous trolls who are out to influence and incite anger rather than inform.
While you may not concur with the liberal-left to conservative-right breakdown provided by AllSides Media Bias, it could offer some food for thought in terms of publications to diversify your news intake.
#3 Pay attention to the followers of those you follow. Individuals with a large disparity between the number of followers they have and the number of people they themselves follow often acquire outsized influence on social media, according to Wired, citing third-party research: “Be cognizant of friends who post frequently but follow few others, and consider muting their accounts if they appear to engage with only like-minded thinkers.”
#4 Change your feeds to focus on the latest content. Both Facebook and Twitter allow users to view the most recent posts first, but the setting is difficult to find and often reverts without warning to the default view, which highlights how much the platforms want you to rely on their algorithms. Changing this setting is worth the effort, says Wired. Some users may immediately see posts from accounts that have been hidden for years.
MakeUseOf offers this tutorial for switching Facebook and Twitter feeds to the chronological version.
#5 Create space for new voices. Chances are you aren’t coming to the confirmation bias conversation with a clean slate, so to really start to see changes in your feed, you may need to consider temporarily muting celebrities whose accounts share your perspectives to make room for the rest of your efforts to break through.
Who knows? You might also find that you hear more from the people who likely motivated you to sign up for a Facebook account in the first place — your IRL friends, family, and colleagues — who (hopefully) represent a diversity of views and backgrounds.
Wired suggests that if those steps are followed, a balanced account’s posts will be shown more frequently and to a wider array of individuals “across all algorithmic spectrums while the platforms attempt to categorize you.”
One wonders if anyone will, in fact, care enough to put the effort into actually doing this.
And should individuals bear the responsibility for making the internet a livable, enjoyable place — or should the companies that run (and monetize) these platforms have more accountability for the actions of their users, bot and human alike?
Battling the Bots
Technologist, virtual reality pioneer and social media critic Jaron Lanier’s position is both/and: Individuals will need to take responsibility for their online activities, and companies will need to furnish the tools and structures to enable them to effectively do so.
However, in an article for The Atlantic, Lanier argues that the polarization, disinformation, and bot problems are not likely to be solved by diversifying the sources of our newsfeeds. Instead, he thinks that the answer lies in the formation of more, smaller like-minded communities online.
Lanier shows his age a bit (or the age of the demographic he’s appealing to) when explaining the idea: “It would be like starting up a zine, a band, or a partnership. You’d find some people with whom you feel compatible, people whom you trust, and then you’d work together to create a brand — a name for your group to be applied to a common feed of posts. Only groups like this would be able to post, not individuals, though individuals would still identify themselves, just like they would when playing in a band or writing in a magazine. Individuals could join multiple groups and groups would self-govern; this is not a heavy-handed idea.”
Reddit’s subreddits, Facebook groups, and GitHub all rely on some variation of this idea.
This approach addresses the need for moderation while also enabling free speech in a fair and consistent manner: it neither infringes on an individual’s desire to avoid certain types of content nor prevents anyone from sharing their ideas.
“What if a bunch of horrible people decide to form a group? Their collective speech will be as bad as their individual speech was before, only now it will be received in a different — and better — social-cognitive environment,” Lanier argues.
This wouldn’t get rid of disinformation or hate speech online, but Lanier argues that these problems have always been among us. Their magnitude is, at least in part, so concerning because it’s difficult to avoid this type of content, no matter how scrupulously curated your social media circle.
“When horrible speech is mixed into an ambient feed, the world feels horrible. But when online experience comes only from branded sources — and, once again, these groups would be formed through free association — then we can compartmentalize what we see.”
Essentially, Lanier believes that encountering something jarring — like a racist comment or violent video — can be more easily digested or understood when it’s presented in context, such as an excerpt from a news article or even a post shared by a group dedicated to a racist ideology.
In Lanier’s concept, group members would be incentivized to moderate — self-police — those who infringe upon the rules of conduct (determined by members, not employees) and report bots that dilute the group’s value.
That value may be monetary or intangible, by the way. Lanier explains that the important thing is that “any benefits of membership would be shared by all who joined. I have my own hopes for how this would work: I’d like to see people in groups agree to smooth out the uncertainty of fate by divvying up rewards — money from micropayments, subscriptions, or patronage, for example — such that everyone within the group has some benefits to help them get by, while still amplifying compensation for individual members who contribute a lot.”
Lanier doesn’t dive into the logistics of such a change, nor does he guarantee that this approach will resolve all of social media’s ills. But this “power to the people” approach may hold a certain attraction for companies and changemakers. It might also be a welcome solution for those who are worried about modern monopolies and Big Tech.
Some of us who want a kinder, gentler internet are already searching for it.
On a personal level, I’ve found that my remaining interest in Facebook lies with the groups I’ve joined. These are well-moderated communities with common goals and interests (buy-nothing groups, local happenings, cats, and a self-care podcast).
Maybe Lanier is on to something.
Adrian Pennington also contributed to this article.