The Social Media Cigarette
Like millions of other thirty-somethings, I now cherish my unobserved childhood. My cringe-worthy preteen phases were not permanently preserved on the internet for all to see. This is a luxury that the generations that follow me will never have; from their first steps, to their piano recitals, to their shitty garage bands, everything will be cataloged, documented, and loaded onto a hard disk somewhere out in the ether. Is it any wonder our kids are struggling so desperately with depression? The immensity of the planet weighs on them at every passing second. There is always someone better than them at the things they love, always someone better looking, and around every digital corner is yet another reason to feel unremarkable. Imagine how hard it must be, for these ever-observed people, to change their minds. So much of our communication now takes place in digital spaces, where it becomes frozen in time, archived for ease of access later. People used to be able to abandon their former mistakes; now, those mistakes have a digital half-life, and they echo into the future.
Social media is a new form of currency, another expression of influence and power. We are locked in a competition to get the cameras in our mobile phones in front of the most compelling and interesting locales. We all believe that we do this for wholesome reasons, but undeniably, there’s a sinister aspect to what we post, too. The pictures of our vacations, of our posh and overpriced dinners at a hip restaurant, are all designed to needle our followers while simultaneously corralling their approval. The posts usually have the intended effect on their targets, and as a result, the social media generation is deeply depressed and terrified of missing out:
“Social-networking sites like Facebook promise to connect us to friends. But the portrait of iGen teens emerging from the data is one of a lonely, dislocated generation. Teens who visit social-networking sites every day but see their friends in person less frequently are the most likely to agree with the statements “A lot of times I feel lonely,” “I often feel left out of things,” and “I often wish I had more good friends.” Teens’ feelings of loneliness spiked in 2013 and have remained high since.
This doesn’t always mean that, on an individual level, kids who spend more time online are lonelier than kids who spend less time online. Teens who spend more time on social media also spend more time with their friends in person, on average—highly social teens are more social in both venues, and less social teens are less so. But at the generational level, when teens spend more time on smartphones and less time on in-person social interactions, loneliness is more common.
So is depression. Once again, the effect of screen activities is unmistakable: The more time teens spend looking at screens, the more likely they are to report symptoms of depression. Eighth-graders who are heavy users of social media increase their risk of depression by 27 percent, while those who play sports, go to religious services, or even do homework more than the average teen cut their risk significantly.
Teens who spend three hours a day or more on electronic devices are 35 percent more likely to have a risk factor for suicide, such as making a suicide plan. (That’s much more than the risk related to, say, watching TV.) One piece of data that indirectly but stunningly captures kids’ growing isolation, for good and for bad: Since 2007, the homicide rate among teens has declined, but the suicide rate has increased. As teens have started spending less time together, they have become less likely to kill one another, and more likely to kill themselves.” [Atlantic]
In the past, you might not know that you weren’t invited to a party, but now, you’ll see the pictures on Facebook and wonder why you didn’t make the cut. It’s easy to see why social media consumption might directly cause depression, especially when you consider what most of us choose to post. We aren’t uploading screenshots of our overdraft fees; we’re posting images of our expensive vacations! We don’t create photo albums of our tooth decay; we post things that others can envy. We project the best possible image of ourselves onto the internet, and as employers and potential employers routinely scour our digital footprints, we are being provided with even more incentive to do so. As our screen time increases, the evidence seems to indicate, our happiness and contentment decline. These symptoms are not universal, of course, and the correlation between social media usage and depression isn’t completely ironclad.
But beyond emotional distress, does our constant screen usage have medical implications as well? That the “blue light” of our phone screens disrupts our REM sleep appears to be widely accepted science in neurology. Without slipping into the mobile-phones-cause-brain-cancer alarmist nonsense, is it possible there are other problems we’re missing?
The answer is something of a mixed bag. It’s possible our eyesight is getting collectively worse; our vision evolved to spot fast-moving threats in the distance, and now, most of the important parts of our world happen a few inches from our face. An epidemic of near-sightedness could be in our future. “Text neck,” a loosely defined repetitive strain injury, seems to be gaining steam as well. Our shoulders, neck, and back are ill-suited for the marathon screen-watching we compulsively put them through. As our lives become more sedentary, so do our bodies, and the health risks arising from that sort of lifestyle are numerous. But what can our phones really do to us? Are they really all that scary?
The Power of Suggestion
I am an avid user of YouTube, like many of you. I pay $14.99 a month for Google Play Music, a streaming music service, and YouTube Red, a premium, ad-free version of YouTube, is bundled with it. I even bought a subscription to YouTube TV in order to watch my favorite sports team without being hassled by regional blackouts. I am in love with all of these purchases, and I feast like a king on media every waking hour of my life.
One of the best things about YouTube is its recommendation engine, and the algorithm that powers it behind the scenes is incredible.

Tools like this are often powered by neural networks or machine learning. They function autonomously, correct their own mistakes automatically, and with each new point of data, the entire system becomes more efficient. They take nearly everything into account: the time since your last watch, the number of your previous impressions, the time you actually spent watching the video, along with countless other factors.
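To make the idea concrete, here is a deliberately simplified sketch of how a score might be computed from the factors above (watch time, recency, and prior impressions). This is a toy illustration, not YouTube’s actual model; the function, weights, and decay constants are all invented for the example:

```python
import math

def recommendation_score(watch_fraction, hours_since_last_watch, prior_impressions):
    """Toy relevance score for a candidate video (illustrative only).

    watch_fraction: share of the video the user actually watched (0..1)
    hours_since_last_watch: time since the user last watched this channel
    prior_impressions: times this video was shown without being clicked
    """
    # Engagement signal: actual watch time matters more than a bare click.
    engagement = watch_fraction
    # Recency decay: interest fades the longer it's been since the last watch.
    recency = math.exp(-hours_since_last_watch / 48.0)
    # Fatigue penalty: suggestions the user keeps ignoring are demoted.
    fatigue = 1.0 / (1.0 + prior_impressions)
    return engagement * recency * fatigue

# A mostly-watched, recently-watched, never-ignored video ranks highest.
candidates = {
    "news_stream": recommendation_score(0.9, 12, 0),
    "panel_show": recommendation_score(0.6, 12, 0),
    "ignored_clip": recommendation_score(0.9, 12, 5),
}
best = max(candidates, key=candidates.get)
```

Even in this crude form, the essential behavior emerges: the system rewards whatever you already engage with, demotes what you ignore, and sharpens itself with every new data point it collects.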
The information YouTube has collated about me is already actionable. I actually spend time and effort to train it more effectively. When it makes a suggestion I don’t like, I can let the app know, and I can even tell YouTube why I didn’t like the recommended video. The system learns from my input and improves automatically.
On some mornings, I like to watch Al-Jazeera’s news broadcast as I drink my coffee. YouTube has learned from this, and if I open the application during the morning hours, their live stream is the first suggestion I see. At night, I like to put on British panel shows as cozy background music while I fall asleep. The algorithm has learned this, too, and my suggested videos appear to adjust based on the time of day I choose to open the application.
It’s hardly Skynet, right? YouTube making autonomous adjustments doesn’t seem all that sinister at first. But what would happen if my interests were different? What if – instead of British panel shows and news feeds – it delivered a stream of Holocaust denial videos? This technology, like any technology, works the same for detriment or benefit. It will dispassionately deliver information to me, without regard to what that information does to me. Recommendation engines and algorithmic targeting have only one goal – to get you to engage.
“Facebook’s draw is its ability to give you what you want. Like a page, get more of that page’s posts; like a story, get more stories like that; interact with a person, get more of their updates. The way Facebook determines the ranking of the News Feed is the probability that you’ll like, comment on, or share a story. Shares are worth more than comments, which are both worth more than likes, but in all cases, the more likely you are to interact with a post, the higher up it will show in your News Feed. Two thousand kinds of data (or “features” in the industry parlance) get smelted in Facebook’s machine-learning system to make those predictions.
What’s crucial to understand is that, from the system’s perspective, success is correctly predicting what you’ll like, comment on, or share. That’s what matters. People call this “engagement.” [Atlantic]
Their incentive, in and of itself, is pernicious. Facebook doesn’t present its users with information that it believes is constructive or helpful. On the contrary, it delivers results that it believes the user wants. Your conservative uncle won’t get well-reasoned and responsibly sourced video analysis of American politics after he ‘likes’ an Ann Coulter video on Facebook. The algorithm will drive him deeper, will show him content that makes his brain release dopamine. The machine doesn’t take the well-being of its clientele into consideration, which, at the very least, should give us some pause.
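The ranking logic the quote describes – shares worth more than comments, comments worth more than likes, posts ordered by predicted probability of interaction – can be sketched in a few lines. The weights and probabilities below are made up for illustration; Facebook’s real system smelts thousands of features, not three hand-set numbers:

```python
# Toy News Feed ranker: posts sorted by expected engagement value.
# Weights reflect the stated ordering: share > comment > like.
WEIGHTS = {"share": 3.0, "comment": 2.0, "like": 1.0}

def expected_engagement(p_like, p_comment, p_share):
    """Expected value of interaction, given predicted probabilities."""
    return (WEIGHTS["like"] * p_like
            + WEIGHTS["comment"] * p_comment
            + WEIGHTS["share"] * p_share)

posts = [
    {"id": "vacation_photos", "p_like": 0.30, "p_comment": 0.05, "p_share": 0.01},
    {"id": "outrage_article", "p_like": 0.10, "p_comment": 0.15, "p_share": 0.12},
    {"id": "status_update",   "p_like": 0.20, "p_comment": 0.02, "p_share": 0.00},
]
feed = sorted(posts, key=lambda p: expected_engagement(
    p["p_like"], p["p_comment"], p["p_share"]), reverse=True)
```

Notice what wins: the post people are likely to share – the provocative one – floats to the top even though it collects the fewest likes. Nothing in the objective asks whether the content is true or good for the reader; it only asks whether you will react.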
You’ve almost certainly seen the phenomenon of thumbnail exploitation; someone posts a meme, and it usually consists of text branded on top of an image of a famous comedian like Will Ferrell or Kevin Hart. Before you’ve even read the joke, your brain is ready to laugh. The author of content like this has made a calculated move; they know our attention spans now function in the space of milliseconds, and the finished product will need to provide instant gratification.
YouTube understands the importance of thumbnails, too, and many of the most viral videos rely on arresting preview images to attract clicks. Any successful YouTuber learns the tricks of the thumbnail trade; they perfect exaggerated facial expressions, they superimpose bright title text on the image, and their descriptions are loaded with words in all-caps. It’s designed to thrill you, to intrigue you, before you even click. It’s an advertising trick like any other, meant to bypass your willpower and better judgement.
This manipulation even extends to children; YouTube has had to clamp down on shrewd channel operators who post videos designed to get millions of children to click. Monetizing the traffic of toddlers is a fairly dirty trick, and Google, Facebook, and Twitter learn about every single one of these dirty tricks each time they have to snuff them out. What these behemoths will know about all of us in ten years – about what we are instinctively drawn to, what we find irresistible, what our brains are hard-coded to engage with – will, without question, inform the future decisions these companies will make. Those future decisions will have one primary motivation: to enrich private companies and return value to their sacred shareholders. And it will be these tech titans – Apple, Google, Facebook, Twitter, et al. – that will control the primary (if not, in due time, the exclusive) means of news delivery.
News was, not long ago, something that someone made a conscious choice to consume. One flipped on the television to catch the evening broadcast, or leafed through the pages of a newspaper in the morning. Now, news is something that happens to you, and often, it happens without much in the way of your consent. Your social media feeds pipe in auto-playing targeted videos with laser-guided precision, and the targeting processes have been digitally honed with the sharpest tools of the advertising industry. The chimera of info-tainment, welded together with constant smartphone access, is a relatively new phenomenon, and the implications of its widening influence are still unknown, but we know the post-internet generations are struggling in ways their predecessors did not:
“The advent of the smartphone and its cousin the tablet was followed quickly by hand-wringing about the deleterious effects of “screen time.” But the impact of these devices has not been fully appreciated, and goes far beyond the usual concerns about curtailed attention spans. The arrival of the smartphone has radically changed every aspect of teenagers’ lives, from the nature of their social interactions to their mental health. These changes have affected young people in every corner of the nation and in every type of household. The trends appear among teens poor and rich; of every ethnic background; in cities, suburbs, and small towns. Where there are cell towers, there are teens living their lives on their smartphone.
To those of us who fondly recall a more analog adolescence, this may seem foreign and troubling. The aim of generational study, however, is not to succumb to nostalgia for the way things used to be; it’s to understand how they are now. Some generational changes are positive, some are negative, and many are both. More comfortable in their bedrooms than in a car or at a party, today’s teens are physically safer than teens have ever been. They’re markedly less likely to get into a car accident and, having less of a taste for alcohol than their predecessors, are less susceptible to drinking’s attendant ills.
Psychologically, however, they are more vulnerable than Millennials were: Rates of teen depression and suicide have skyrocketed since 2011. It’s not an exaggeration to describe iGen as being on the brink of the worst mental-health crisis in decades. Much of this deterioration can be traced to their phones.” [Atlantic]
Google – the company that owns YouTube – is first and foremost an advertising company. It is their primary business. These predictive algorithms are the perfect advertising tools. They adapt as their audience changes. The predictive analytics engines that drive them are shockingly powerful. One such system, deployed by the retailer Target, determined from her purchase history that a teenage customer was pregnant before her own family knew.
What humans want is often at odds with what they need. Our desires are often so far removed from reality that we go out of our way to nurse them. We’ve all got delusions of grandeur buried in our brains somewhere; they were hard enough to keep in check before the internet, and now, they’re even more difficult to control. Internet advertising is a never-ending attempt to erode your willpower. It is a constant war of attrition against your better judgement. At no point in human history have advertisers had such a rich and limitless source of data about their consumers – about what they buy, about what they buy after they buy something, and how they feel after they’ve bought the thing you want them to buy. Even the movements of your mouse cursor and fingertips are tracked when you visit certain sites.
It is this well-oiled, data-hungry, perpetually improving machine that delivers what you know about this world to you. Not an editor of a newspaper, not the owner of a cable news network, but a marketing monolith that gets faster, better, and stronger with every passing millisecond. As internet access and mobile phones increasingly become codified into law as human rights, the writing begins to appear on the wall: eventually, this will be the only way to get news, and whoever holds the keys to that empire can control the way most of us think about the world.
Press in the Era of Fake News
Historically, new avenues of information become breeding grounds for sensationalism before humanity collectively balances them out. The printing press gave us the broadsheet newspaper, and, after barriers to publication started to disintegrate, it gave us the National Enquirer. The internet is still in its infancy, and we don’t yet know what adjustments need to be made. Most of us can’t always tell the difference between a good source and a bad one online, and that’s a serious problem.
We protect professional designations, by law, for our collective safety. If anyone could call themselves a doctor, a lot of people would die. If anyone could call themselves an engineer, a lot of important structures would collapse. Journalism has scarcely any such protection. In our era, anyone can open a website, anyone can publish news, and anyone can develop a dedicated digital fan base. The consequences of this reality are already becoming evident, and will only get worse as time progresses.
When Edgar Welch stormed into a D.C.-area pizza restaurant, he was armed with an AR-15, locked and loaded – literally – with the intention of ending a child abuse ring he had read about on the internet. He had decided to take action on a conspiracy theory known as PizzaGate, which falsely claimed that various Beltway power brokers were trafficking children inside of a pizzeria. Welch acted on information he had received from dubious, unhinged, and pseudo-journalistic sources. The details of his digital diet are perhaps known only to the American intelligence agencies who investigated Welch after the fact, but it raises the question: what did the social media algorithms suggest to him? What role did they play in his descent? What videos did YouTube ‘recommend’ to him?
This isn’t to implicate tech companies in the actions of individuals who exercise their own free will, nor is it a Luddite-styled admonishment against the advance of technology. It is, however, a cautionary tale of how the world has changed, and a call for us to look into how we should change with it. Smokers, at every impasse, technically have the choice to never smoke another cigarette again – but that analysis is overly simplistic, because it ignores the tremendous neurological strain of addiction that regular nicotine intake creates. Fixing this may not be as simple as people self-regulating. It may not be as easy as just telling people to stop clicking on the clickbait. It might be the sort of thing normal people can’t quit cold turkey:
“Chamath Palihapitiya, who worked as Facebook’s vice president for user growth, was speaking at an event run by the Stanford Graduate School of Business on 10 November when he described feeling “tremendous guilt” in helping the company attract two billion users.
“We have created tools that are ripping apart the social fabric of how society works,” he told the audience.
He advised people to take a “hard break” from social media, describing its effect as “short-term, dopamine-driven feedback loops.”
If you feel a little queasy about modifying the first amendment, well, good. So do I. Freedom of assembly, religion, press, and speech are the cornerstones of American democracy. By changing it, we’re playing with live ammo. In one sense, this is an incredibly important discussion, and in another sense, it doesn’t have to be. We’ve already amended the Constitution on twenty-seven separate occasions. Times change, and good governments change with them. It is at this point that we must recognize that our nation – with its space travel, artificial intelligence, neural networks, and cryptocurrencies – would be wholly unrecognizable to the people who founded the country by winning a war with muskets and cannons. We cannot rely on the ‘marketplace of ideas’ to bail us out of this quagmire. The dopamine-loops are too powerful. When Facebook crafts, customizes, and targets every piece of information you see on your feed directly at your instinctive desires, our collective good judgement cannot save us. So what would a more prudent media policy look like in the United States? Mimicking our neighbors across the Atlantic might be a good place to start:
“Beyond rallies, debates, and primaries, political ads practically define election season in America. Especially in swing states, it can be impossible to turn on the television or the radio without being inundated.
But in the U.K., “We have very strict rules where you’re not really allowed to advertise via television or radio as a political party,” says Katie Ghose. She’s chief executive of the Electoral Reform Society, a nonpartisan group that focuses on improving the way campaigns operate.
The internet has allowed for a bit of American-style political advertising in the U.K., but British campaigns don’t have money for the hyper-saturation that Americans are used to. And political spending by outside organizations is not allowed.
“We just think that there is really a grotesque amount of money spent in the U.S. on politics,” says Ghose. That’s a pretty widely-held view in Britain, which highlights a big cultural difference between the U.S. and the U.K.: In America, campaign laws value free speech above all else. The Supreme Court has ruled that limits on campaign spending may amount to limits on speech. In the U.K., people talk less about free speech and more about what Ghose calls “a level playing field.”
“If you have one party that’s just able to amass a load of money and shout louder than the others, that’s not healthy for democracy,” Ghose says. “And we wouldn’t interpret freedom of speech to mean an unlimited ability to spend, spend, spend.” [NPR]
What we encounter on our Facebook feeds every day is media. Facebook is a media company, no matter what they say in the press. The conversation about digital responsibility as it pertains to law is already underway in the United States. We understand innate unfairness online selectively, when we really ought to paint with a far broader brush:
“Last month, state representative Chris Lee publicly launched his effort to pass legislation regulating the sales of video games with loot boxes in Hawaii. In a press conference flanked by religious and business leaders, parents, and affected gamers, he called out “predatory practices in online gaming and the significant financial consequences they can have on families.” Battlefront 2 got specific condemnation as a “Star Wars-themed online casino” in Lee’s telling.
Rep. Lee doesn’t use the word “predatory” lightly in describing loot boxes. He says it’s an appropriate adjective for game makers who are knowingly exploiting addictive gambling mechanisms to manipulate players and increase their bottom line.” [Arstechnica]
I think most Americans would support bills like the one introduced by Representative Lee; nobody wants their kids to run up a thousand-dollar micro-transaction bill on their credit card, and the sort of protections Lee has proposed wouldn’t seem controversial to most. But we’re ignoring the canaries in other mines. The gaming industry isn’t the only segment of the economy operating on these principles – nearly every app, every website, every internet-connected thing does too, and with each passing millisecond, they all get smarter and more efficient.
Your phone is a portable slot machine. The content you interact with is loaded with hooks designed to ensnare you, and the developers of nearly any application are looking for new ways to consume more and more of your time. We don’t know exactly how the law and the digital world should engage upon each other, but the dialogue has already begun, and the last thing we should do is shy away from this uncomfortable discussion.
Tech companies have no ceiling to their success. Jeff Bezos got his start selling books online, and now he is one of the most powerful men on the face of the entire planet. Elon Musk builds spaceships and electric cars, and he got his start at PayPal. Raw power can always find a way to move laterally, and what starts as an app can blossom into anything.
Tying these individual strands of the new media landscape together is a gargantuan task. It’s important to keep in mind that it’s never all one thing or another; it’s not just free speech, not just a free press, and not just the influence of technology. There are no scapegoats, but the entire image, taken together, paints a bleak picture that will get worse with time. We haven’t uncovered the hidden dangers yet, and they probably won’t be made clear to us until the damage is already done.
“When they start taking that step of targeting information, I think there’s an argument to potentially be made that they’re no longer just like any other publishing outlet but that they’re actually actively participating in who sees what and with what degree of impact,” said Alexa Koenig, the director of UC Berkeley’s Human Rights Center.
Facebook doesn’t make the content and isn’t liable for it beyond its own community standards, but it does make, manage, and moderate the systems that move certain content to the top of a user’s News Feed, in the service of keeping more users engaged with content on the platform and thus placing more eyeballs on ads. Algorithmic content curation and targeting, this argument goes, is still an act of curation and targeting…”
“As the director of the Dangerous Speech Project, the American University adjunct associate professor Susan Benesch understands the common reflex to try to use law to regulate online content—or to force platforms to do it—but believes there are a lot of good reasons for caution. For one thing, Benesch argued, laws against specific kinds of speech tend to be more often used against the marginalized than the powerful. For another, enforcement of censorship laws tends to be overzealous, especially when censorship is outsourced by governments to private actors.
Benesch doubts that Facebook or any other company has software sophisticated enough to automate the kind of nuanced comprehension needed to understand the harms of specific kinds of speech in a variety of social contexts, and would be likely to take down too much content. “Laws that compel people to do things exert enormous pressure toward overregulation. The software just isn’t ready to effectively discern at scale.”
Social media is our generation’s nicotine. When the purpose of these ubiquitous products is to bypass our willpower, when their goal is to appeal to our impulses, then the comparison to the tobacco industry begins to make more and more sense. They don’t care about us. They care about monetizing us, and history tells us that serious, long-term damage is in our future. Crucially, a delivery method such as this, one that operates by manipulating our brains on an instinctive level, should not be the way in which we primarily interact with the world. It shouldn’t be the way we read news. It shouldn’t be the way we make decisions. Are there technical limitations to what these companies can do to control the content that gets published on their platforms? Certainly. Is there a chance that our society might over-correct? Absolutely. But these are quibbles, minor disagreements in approach, rather than total non-starters.
We already have similar protections and safeguards in place for other industries. I couldn’t advertise a cure for cancer, take the profits, and split after the product is found to be a fraud. I have to submit that product to the FDA, and I have to confirm that my claims about the pill are accurate. Fox News has no similar restraints. They can be openly deceptive, with little to no consequence. They can publish stories that are flatly untrue with impunity. It isn’t just Fox News; many other networks are ardent partisans, or, in the case of RT and Al-Jazeera, are wholly owned and operated wings of state-run media.
The unsightly truth is that millions of Americans cannot intelligently navigate through all of the fabricated and partisan news without falling victim to intentionally misleading information. We need new training wheels. When Alex Jones has the ear of the White House, incontrovertibly, undeniably, something has gone terribly awry on our watch.
For whatever it’s worth, I wish I could trust the marketplace of ideas. I would love for all Americans to be smart enough to steer clear of the potholes, and I wish I could flippantly dismiss the crackpots. But we tried ignoring them, didn’t we? We refused to give them our attention, and they won anyway. The conflict is asymmetrical between pluralist and democratic voices and fascist and fundamentalist ones. The former will, in the name of tolerance, accept all input on principle, and the latter will use a free platform to advance ideologies that conspire to destroy that platform for everyone else. This is an imbalance that cannot go unaddressed forever.
The stubborn voices will tell us at this juncture that we need to defeat nefarious voices in the media through open debate. This suggestion is incredibly naive. Jones’ own legal counsel claims he is a performance artist. How do you debate with a performance artist? How do you present a formal argument to an actor who openly admits he does not argue in good faith? The first amendment to people like Alex Jones is nothing more than a means to an end, a crude tool they use to milk a paycheck out of vulnerable people who don’t know any better. To them, the first amendment is the teeth they use to rip apart their prey. By ignoring the abuses, we become their accomplices. News isn’t sterile, bland, and dispassionate, at least not anymore. The news is transformative now; we can watch an instant replay of a law enforcement officer ending someone’s life, or take a 360-degree, virtual reality tour of the bombed-out remains of Aleppo. These new abilities have virtually no parallel in human history.
The next crisis in journalism will almost certainly come from technology: namely, from advances in image, sound, and video manipulation software that will soon give us the ability to fabricate videos and audio recordings entirely. Imagine the panicked social media posts that will spring up when someone can fake a conversation between Trump and Putin, or can simulate a recording of President Obama admitting he was born in Kenya. How will we verify what is basically true and false? Who will we be able to trust?
This prospect is particularly worrisome, especially when you consider that much of what we spend our time looking at online is already fake. Lots of girls and women log into Instagram and Pinterest, and fill their retinas with images of unattainable and unreasonable beauty, usually obtained by surgical procedures or Photoshop. Men will often split their online time alternating between the fantasy worlds of pornography and video games. Technology makes the world less painful, and it does that, largely, by prying us away from our painful realities.
Instinctively, we evaluate threats on a solitary basis. We have a tendency to see things as all or nothing. But a blizzard isn’t one giant snowflake; it’s a confluence of trillions of different ones. To borrow from a rather grim example, we know people in North Korea died of starvation during their famine, even if the causes of death were attributed to other things:
“In a famine, people don’t necessarily starve to death. Often some other ailment gets them first. Chronic malnutrition impairs the body’s ability to battle infection and the hungry become increasingly susceptible to tuberculosis and typhoid. The starved body is too weak to metabolize antibiotics, even if they are available, and normally curable illnesses suddenly become fatal. Wild fluctuations of body chemistry can trigger strokes and heart attacks. People die from eating substitute foods that their bodies can’t digest. Starvation can be a sneaky killer that disguises itself under bland statistics of increased child mortality or decreased life expectancy.” [Demick, P140]
Who’s to say, at this point in human history, that our digital habits aren’t causing problems under different names? Isn’t it possible that a rise in childhood obesity or juvenile diabetes could be caused by our newly-sedentary lifestyles? Facebook is now cited in one-third of divorce filings; what does constant exposure to that kind of stress do to us long term? Do we really know that much about the science of how all of these disparate pieces of our always-connected lives influence us?
Sure, there’s a lot of brain-dead pseudo-science in the Luddite backlash crowd: people who claim that they’ve been sickened by everything from RF signals emitted by cell phone towers and WiFi routers to solar panels and wind farms. Others believe the microscopic levels of radiation from our phones cause cancer, or worsen existing cases. I’m not eager to join their ranks. These claims are all dubious, and established science has consistently found little to no basis in fact to support them. Sorting the fact from the fiction becomes more difficult in a climate like this, and I’m quite keen to avoid the crackpot pitfalls. In a larger sense, I don’t want to lose the forest for the trees either. Technology is a miracle in so many ways, and it facilitates things we could only dream about fifty years ago. I couldn’t deny the truth of this reality, even if I wanted to.
But the constant access to our phones, to the internet, to social media, to pornography, to the perpetual availability of enthralling and ever-intensifying stimuli, has to have changed us somehow. My phone is the first thing I see in the morning, and the last thing I see before I go to bed at night. I use technology to hide from the silence, to hide from the tyranny of my own thoughts, to escape from my boredom, from my anxiety, from my bouts of fitful impatience. I suspect the same is true of many of you, too. The end result of all of this can’t be nothing. My aching concern is that it has not changed us for the better, but for the worse.
The Hashtag Moment
On October 1st, 2017, Stephen Paddock murdered 58 people and wounded 546 others. The carnage was surreal, nearly unprecedented in its scale and scope. This mass shooting took place in the heart of Las Vegas, Nevada. Paddock had ambushed unarmed civilians with volleys of rifle fire from a high-rise hotel.
The N.H.L. had created a new expansion hockey club in Las Vegas for the 2017 season, and Paddock’s massacre came little more than a week before the team’s home opener. As a sign of respect, the organization stripped all of the advertising off of the boards and replaced it with the words “#VegasStrong.”
I don’t mention the organization’s response to the tragedy to criticize it; the tribute was widely praised, and the ceremony was better than most others, honoring not just the law enforcement officers who responded to the crisis, but the nurses, EMTs, doctors, and surgeons as well, who are often not thanked enough for their invaluable public service.
Vegas’ ceremony was noteworthy because it was another hashtag moment: a social media phenomenon we are now all accustomed to, one that has wormed its way into our collective lexicon. Hundreds of people are hit by gunfire, and American society at large responds by taking out their phones. The “thoughts and prayers” reflex – so expertly explored by comedian Anthony Jeselnik – stands as an indictment of our narcissism and naivete.
Maybe there is something to be said for the digital ‘moments of silence’ posts. Perhaps I’m jaded, and any attention is better than nothing at all. Maybe those who have lost loved ones find some form of solace in passing expressions of trite concern from strangers. Maybe humanity needs do-nothing, feel-good posts, and we shouldn’t complain about how others decide to spend their digital time.
When you finally kick the bucket, what will your hashtag be? What sort of social media tributes will I inspire? Has the Tweet become our new gravestone? Can you summarize an entire life with a status update? Can you summarize your own in two hundred and eighty characters? I like to think of myself as a modern man, one who’s comfortable with the rapidly changing world of tech, but my taste for tolerance, when it comes to these blithe, virtue-signaling posts, sours quite quickly.
Social media always starts with you. You open the application, and you’re logged into your profile: a shrine of your own creation, containing images of you, videos of you, to be viewed by your friends, your family, and your colleagues. Your social media temple sticks around long after you’re gone, and in many cases, people end up treating these profiles exactly the same as tombstones, leaving mournful comments and making their grief public. The quiet dignity of choosing to lick your own wounds in peace, without fanfare, has seemingly gone extinct, and what often replaces it is a twenty-one-click salute. Admittedly, I’m teetering dangerously close to an empty appeal to nature. There’s nothing inherently wrong with our new world order, and while social media does naturally lend itself to self-obsession, there are so many examples of these tools being used for good, rather than evil. Again, it’s never all one thing, or all another. But as our culture, as our traditions, as our perception of the world around us gets contorted by the phones in our pockets, we should ask ourselves if we like the people we’ve become.