
Social media is giving us trypophobia

Something is rotten in the state of technology.

But amid all the hand-wringing over fake news, the cries of election-deforming Kremlin disinformation plots, and the calls from political podia for tech giants to locate a social conscience, a knottier realization is taking shape.

Fake news and disinformation are just a few of the symptoms of what’s wrong and what’s rotten. The problem with platform giants is something far more fundamental.

The problem is these vastly powerful algorithmic engines are blackboxes. And, at the business end of the operation, each individual user only sees what each individual user sees.

The great lie of social media has been to claim it shows us the world. And their follow-on deception: That their technology products bring us closer together.

In truth, social media is not a telescopic lens — as the telephone actually was — but an opinion-fracturing prism that shatters social cohesion by replacing a shared public sphere and its dynamically overlapping discourse with a wall of increasingly concentrated filter bubbles.

Social media is not connective tissue but engineered segmentation that treats each pair of human eyeballs as a discrete unit to be plucked out and separated off from its fellows.

Think about it: it’s a trypophobe’s nightmare.

Or the panopticon in reverse — each user bricked into an individual cell that’s surveilled from the platform controller’s tinted glass tower.

Little wonder lies spread and inflate so quickly via products that are not only hyper-accelerating the rate at which information can travel but deliberately pickling people inside a stew of their own prejudices.

First it panders, then it polarizes, then it pushes us apart.

We aren’t so much seeing through a lens darkly when we log onto Facebook or peer at personalized search results on Google, we’re being individually strapped into a custom-moulded headset that’s continuously screening a bespoke movie — in the dark, in a single-seater theatre, without any windows or doors.

Are you feeling claustrophobic yet?

It’s a movie that the algorithmic engine believes you’ll like. Because it’s figured out your favorite actors. It knows what genre you skew to. The nightmares that keep you up at night. The first thing you think about in the morning.

It knows your politics, who your friends are, where you go. It watches you ceaselessly and packages this intelligence into a bespoke, tailor-made, ever-iterating, emotion-tugging product just for you.

Its secret recipe is an infinite blend of your personal likes and dislikes, scraped off the Internet where you unwittingly scatter them. (Your offline habits aren’t safe from its harvest either — it pays data brokers to snitch on those too.)

No one else will ever get to see this movie. Or even know it exists. There are no adverts announcing it’s screening. Why bother putting up billboards for a movie made just for you? Anyway, the personalized content is all but guaranteed to strap you in your seat.

If social media platforms were sausage factories we could at least intercept the delivery lorry on its way out of the gate to probe the chemistry of the flesh-colored substance inside each packet — and find out if it’s really as palatable as they claim.

Of course we’d still have to do that thousands of times to get meaningful data on what was being piped inside each custom sachet. But it could be done.

Alas, platforms involve no such physical product, and leave no such physical trace for us to investigate.

Smoke and mirrors

Understanding platforms’ information-shaping processes would require access to their algorithmic blackboxes. But those are locked up inside corporate HQs — behind big signs marked: ‘Proprietary! No visitors! Commercially sensitive IP!’

Only engineers and owners get to peer in. And even they don’t always understand the decisions their machines are making.

But how sustainable is this asymmetry? We, the wider society, are their business model: platforms depend on us for data, eyeballs, content and revenue. If we can’t see how we are being divided by what they individually drip-feed us, how can we judge what the technology is doing to us, one and all? And figure out how it’s systemizing and reshaping society?

How can we hope to measure its impact? Except when and where we feel its harms.

Without access to meaningful data how can we tell whether time spent here or there or on any of these prejudice-pandering advertiser platforms can ever be said to be “time well spent”?

What does it tell us about the attention-sucking power that tech giants hold over us when — just one example — a train station has to put up signs warning parents to stop looking at their smartphones and point their eyes at their children instead?

Is there a new idiot wind suddenly blowing through society? Or have we been unfairly robbed of our attention?

What should we think when tech CEOs confess they don’t want kids in their family anywhere near the products they’re pushing on everyone else? It sure sounds like even they think this stuff might be the new nicotine.

External researchers have been trying their best to map and analyze flows of online opinion and influence in an attempt to quantify platform giants’ societal impacts.

Yet Twitter, for one, actively degrades these efforts by playing pick and choose from its gatekeeper position — rubbishing any studies with results it doesn’t like by claiming the picture is flawed because it’s incomplete.

Why? Because external researchers don’t have access to all its information flows. Why? Because they can’t see how data is shaped by Twitter’s algorithms, or how each individual Twitter user might (or might not) have flipped a content suppression switch which can also — says Twitter — mould the sausage and determine who consumes it.

Why not? Because Twitter doesn’t give outsiders that kind of access. Sorry, didn’t you see the sign?

And when politicians press the company to provide the full picture — based on the data that only Twitter can see — they just get fed more self-selected scraps shaped by Twitter’s corporate self-interest.

(This particular game of ‘whack an awkward question’ / ‘hide the unsightly mole’ could run and run and run. Yet it also doesn’t seem, long term, to be a very politically sustainable one — however much quiz games might be suddenly back in fashion.)

And how can we trust Facebook to create robust and rigorous disclosure systems around political advertising when the company has been shown failing to uphold its existing ad standards?

Mark Zuckerberg wants us to believe we can trust him to do the right thing. Yet he is also the powerful tech CEO who studiously ignored concerns that malicious disinformation was running rampant on his platform. Who even ignored specific warnings that fake news could impact democracy — from some pretty knowledgeable political insiders and mentors too.

Biased blackboxes

Before fake news became an existential crisis for Facebook’s business, Zuckerberg’s standard line of defense to any raised content concern was deflection — that infamous claim ‘we’re not a media company; we’re a tech company’.

Turns out maybe he was right to say that. Because maybe big tech platforms really do require a new type of bespoke regulation. One that reflects the uniquely hypertargeted nature of the individualized product their factories are churning out at — trypophobics look away now! — 4BN+ eyeball scale.

In recent years there have been calls for regulators to have access to algorithmic blackboxes to lift the lids on engines that act on us yet which we (the product) are prevented from seeing (and thus overseeing).

Rising use of AI certainly makes that case stronger, with the risk of prejudices scaling as fast and far as tech platforms if they get blindbaked into commercially privileged blackboxes.

Do we think it’s right and fair to automate disadvantage? At least until the complaints get loud enough and egregious enough that someone somewhere with enough influence notices and cries foul?

Algorithmic accountability should not mean that a critical mass of human suffering is needed to reverse engineer a technological failure. We should absolutely demand proper processes and meaningful accountability. Whatever it takes to get there.

And if powerful platforms are perceived to be footdragging and truth-shaping every time they’re asked to provide answers to questions that scale far beyond their own commercial interests — answers, let me stress it again, that only they hold — then calls to crack open their blackboxes will become a clamor, because they will have broad public support.

Lawmakers are already alert to the phrase algorithmic accountability. It’s on their lips and in their rhetoric. Risks are being articulated. Extant harms are being weighed. Algorithmic blackboxes are losing their deflective public sheen — a decade+ into platform giants’ huge hyperpersonalization experiment.

No one would now doubt these platforms impact and shape the public discourse. But, arguably, in recent years, they’ve made the public street coarser, angrier, more outrage-prone, less constructive, as algorithms have rewarded trolls and provocateurs who best played their games.

So all it would take is for enough people — enough ‘users’ — to join the dots and realize what it is that’s been making them feel so uneasy and queasy online — and these products will wither on the vine, as others have before.

There’s no engineering workaround for that either. Even if generative AIs get so good at dreaming up content that they could substitute a significant chunk of humanity’s sweating toil, they’d still never possess the biological eyeballs required to blink forth the ad dollars the tech giants depend on. (The phrase ‘user generated content platform’ should really be bookended with the unmentioned yet entirely salient point: ‘and user consumed’.)

This week the UK prime minister, Theresa May, used a World Economic Forum speech in Davos to slam social media platforms for failing to operate with a social conscience.

And after laying into the likes of Facebook, Twitter and Google — for, as she tells it, facilitating child abuse, modern slavery and the spread of terrorist and extremist content — she pointed to an Edelman survey showing a global erosion of trust in social media (and a simultaneous leap in trust for journalism).

Her subtext was clear: Where tech giants are concerned, world leaders now feel both willing and able to sharpen the knives.

Nor was she the only Davos speaker roasting social media.

“Facebook and Google have grown into ever more powerful monopolies, they have become obstacles to innovation, and they have caused a variety of problems of which we are only now beginning to become aware,” said billionaire US philanthropist George Soros, calling — out-and-out — for regulatory action to break the hold platforms have built over us.

And while politicians (and journalists — and most probably Soros too) are used to being roundly hated, tech firms most certainly are not. These companies have basked in the halo that’s perma-attached to the word “innovation” for years. ‘Mainstream backlash’ isn’t in their lexicon. Just like ‘social responsibility’ wasn’t until very recently.

You only have to look at the worry lines etched on Zuckerberg’s face to see how ill-prepared Silicon Valley’s boy kings are to deal with roiling public anger.
