SOCIAL MEDIA AND ITS INFLUENCE
ON PUBLIC OPINION
SUBMITTED TO: MR. ABHAYACHANDRAN K.
SUBMITTED BY: GIRISH P. KARUVELIL
B.A. LL.B (HONS)
4TH YEAR
ROLL NO. 1184
INTRODUCTION
The advent of social media introduced transformative platforms for people to share thoughts and
information in entertaining and connective ways. But the benefits are increasingly being
overshadowed by negative consequences as the monetization and manipulation of information
threatens to tear us apart.
ROLE OF MEDIA IN THE INTERNET AGE
Recent years have witnessed a dramatic change in the way people communicate and obtain
information. Consumption of traditional news media is declining, while online media has become an
increasingly important source of information. The key questions are how this communication
revolution has influenced information flows across individuals and how those flows can be
influenced.
MEDIA AND DEMOCRACY
The media plays a vital role in a democracy, informing the public about political issues and acting as
a watchdog against abuses of power. During election campaigns, the media provides information and
analysis about the political parties’ programmes, policies, candidates and performance.
SOCIAL MEDIA AND POLITICS
Electronic communication can make politicians seem more remote; there is still little connection
between the politics where power is brokered and the network society itself. At the same time, social
media has transformed politics in India and globally, changing the way candidates campaign for
election. Social media gives politicians and political parties a way to connect directly with people
across the country at lower cost and with greater reach than traditional media.
Social media is not simply the next in a line of communications technologies: it has also changed
everyday activities and connected people in a manner never before possible.
By any meaningful measure, the category we refer to as “social media” today operates at massive
scale. These platforms are pervasive, and fully and effectively integrated into the public discourse
and lives of individuals.
By 2017, for example, Facebook-owned platforms already reached 86% of Internet users aged 16 to 64
in 33 countries and effectively acted as the gateway to the Internet, if not the Internet itself. In a sense,
Facebook is becoming the world’s largest news source: 44% of people across 26 countries surveyed
say they use it for news.
Similarly, in the U.S., two-thirds of Facebook users (66%) get news on the site, nearly six out of ten
Twitter users (59%) get news on Twitter, and, highest of all, 70% of Reddit users get news on that
platform.
Similar trends exist for 18-to 24-year-olds and users in emerging economies such as the Philippines
and Myanmar.
The advantage social media platforms such as Facebook and Google have in monetizing attention
accrues from their unprecedented, large-scale collection and analysis of personal data. Less
sanguine is the harvesting of behavioural and psychographic profiles, which can be used to deliver
personalized content and advertising—much of which is unregulated and invisible to all but the
recipient.
THE PROBLEMS POSED BY SOCIAL MEDIA
The early optimism about social media’s potential for democratizing access to information, and
giving voice to those who were traditionally marginalized or censored, is eroding. Indeed, as social
media platforms have grown, they have been accused of:
• Exacerbating the polarization of civil society via echo chambers and filter bubbles
• Rapidly spreading mis- and disinformation and amplifying the populist and illiberal wave across the
globe
• Creating competing realities driven by their algorithms’ intertwining of popularity and legitimacy
• Being vulnerable to political capture and voter manipulation through enabling malevolent actors
to spread disinformation and covertly influence public opinion
• Capturing unprecedented amounts of data that can be used to manipulate user behaviour
• Facilitating hate speech, public humiliation, and the targeted marginalization of disadvantaged or
minority voices
Echo chambers, polarization, and hyper-partisanship
Social media platform design, combined with the proliferation of partisan media in traditional
channels, has exacerbated political divisions and polarization. Additionally, some social media
algorithms reinforce divisions and create echo chambers that perpetuate increasingly extreme or
biased views over time.
Spread of false and/or misleading information
Today, social media acts as an accelerant, and an at-scale content platform and distribution channel,
for both viral “dis”-information (the deliberate creation and sharing of information known to be
false) and “mis”-information (the inadvertent sharing of false information). These two types of
content—sometimes mistakenly conflated into the term “fake news”—are created and disseminated
by both state and private actors, in many cases using bots. Each type poses distinct threats for public
dialogue by flooding the public square with multiple, competing realities and exacerbating the lack
of agreement about what constitutes truth, facts, and evidence.
Conversion of popularity into legitimacy
The algorithms behind social media platforms convert popularity into legitimacy, overwhelming the
public square with multiple, conflicting assertions. In addition, some social media platforms assume
user intentionality (e.g. in search queries) and conflate this with interest, via features such as auto-
fill search terms. These design mechanisms impute or impose certain ways of thinking, while also
further blurring the lines between specialists and laypeople, or between verified and unverified
assertions, thus contributing to the already reduced trust in traditional gatekeepers.
Manipulation by “populist” leaders, governments, and fringe actors
“Populist” leaders use these platforms, often aided by trolls, “hackers for hire” and bots, on open
networks such as Twitter and YouTube. Sometimes they are seeking to communicate directly with
their electorate. In using such platforms, they subvert established protocol, shut down dissent,
marginalize minority voices, project soft power across borders, normalize hateful views, showcase
false momentum for their views, or create the impression of tacit approval of their appeals to
extremism. And they are not the only actors attempting to use these platforms to manipulate
political opinion— such activity is now acknowledged by governments of democratic countries (like
the UK), as well.
Personal data capture and targeted messaging/advertising
Social media platforms have become a preferred channel for advertising spend. Not only does this
monetization model drive businesses reliant on the capture and manipulation of huge swathes of
user data and attention, it also widens the gap between the interests of publishers and journalists
and erodes traditional news organizations’ revenues. The resulting financial strain has left news
organizations financially depleted and has reduced their ability to produce quality news and hold the
powerful to account. In addition, advanced methods for capturing personal data have led to
sophisticated psychographic analysis, behavioural profiling, and micro-targeting of individuals to
influence their actions via so-called “dark ads.”
Disruption of the public square
Some social media platforms have user policies and technical features that enable unintended
consequences, like hate speech, terrorist appeals, and racial and sexual harassment, thus
encouraging uncivil debate. This can lead members of frequently targeted groups—such as women
and minorities—to self-censor or opt out of participating in public discourse. Currently, there are
few options for redress. At the same time, platforms are faced with complex legal and operational
challenges with respect to determining how they will manage speech, a task made all the more
difficult since norms vary widely by geographic and cultural context.
Issue One: Echo chambers, polarization, and hyper-partisanship
The “attention economy” is predicated on understanding and targeting individual users with a
variety of customized content. It is well documented that an individual’s search results, newsfeed,
and advertising offers are dependent on, and generated from, their digital footprint as processed by
the ever-evolving algorithms of social media platforms. Advertising is the most important of these,
since it underwrites the business model of the social media giants. The prioritization of user preferences
results in a feedback loop where the feeding of news, search results, and social network updates
that align with user attitudes and interests exacerbates and reinforces user preferences—and on
platforms such as Facebook, this tends to promote self-segregation into like-minded groups.
Few users consume purely partisan media, but social media platform design and the proliferation of
partisan media in traditional channels have exacerbated partisanship and identity polarization by
creating “echo chambers” where views get reinforced and become entrenched—and more extreme
—over time. Part of this is by design. But another important part is due to the way people decide
whether news and information online can be trusted.
A recent American Press Institute (API) study shows that when American readers see news on social
media platforms, it’s not the source of the news that matters as much as whom in their network
shares the link.
This increased personalization means users are more likely to see (and believe) what their peers
share than what news publishers curate, making them less likely to encounter multi-faceted or
counter-attitudinal views.
Issue Two: Proliferation of several types of misinformation and disinformation
The use of false information to change public opinion is at least as old as the newspaper.
Today, social media acts as an accelerant, and an at-scale content platform and distribution channel,
for what is now widely referred to as “fake news.” This much-maligned term actually comprises
several types of “dis”-information (the deliberate creation and sharing of information known to be
false) and “mis”-information (the inadvertent sharing of false information).
Some of these types of content present a higher risk to democratic discourse—e.g. it’s clear that
‘fabricated content’ is qualitatively different in intention and potential impact than satire or parody,
or, for that matter, false context. These types of information are created and disseminated by a
variety of actors—including state and private actors who often use bots — and often to different
ends.
Oxford University’s Internet Institute exposed patterns of automated fake news production recently
during their real-time investigation of political communication during the Brexit referendum in the
UK.
The low barriers to creation and distribution of online content have facilitated massive growth in
“news publishers” whose revenue models maximize attention and engagement with low regard for
quality control or traditional journalistic ethics.
A review of YouTube recommendations on the eve of the U.S. presidential election showed that
“more than 80% of recommended videos were favorable to Trump, whether the initial query was
‘Trump’ or ‘Clinton.’ A large proportion of these recommendations were fake news.”
Egregious instances of this phenomenon also include the Macedonian teenagers who mastered fake
news to generate pocket money: stories in The Guardian and elsewhere revealed that the Macedonian
town of Veles was the registered home of at least 100 pro-Trump websites, many of them filled with
sensationalist, utterly fake news (the imminent criminal indictment of Hillary Clinton was a popular
theme; another was the Pope’s endorsement of Trump). Automated advertising engines, like Google’s
AdSense, rewarded the sites’ ample traffic handsomely.
Social network platforms have huge incentives to accommodate the creation and distribution of
content and feed the “attention economy.” And, unlike regulated media, there are no real
consequences to these networks for distributing fake news (except in Germany, where lawmakers
recently passed the much-debated Network Enforcement Act, which allows fines of up to $57M
against social media companies that don’t remove “obviously illegal” content on their sites within 24
hours). Fake news helps maximize ad-click revenue by keeping users on the platform.
Issue Three: Conflation of popularity, legitimacy and user intentionality
Over the last two decades, technology companies have spent a huge amount of money and effort to
develop ways to get people to trust each other online, in conversations and transactions, on various
platforms and marketplaces. Social media takes this to the next level—doubling down on the age-old
locus of trust, reputation, and belief in one’s networks.
One of the arenas in which this has serious consequences is on platforms where all individuals can
publish without meaningful editorial oversight, and where polarization has led to echo chambers.
Crowdsourced discussion platforms, including ones such as Wikipedia, Quora, and Reddit, further
blur the lines between specialists and laypeople, creating false equivalencies. In the U.S., the
crowd-sourced information phenomenon is now tied into a larger narrative and growing
backlash against experts and elites, who are viewed as having a self-serving agenda.
Crucial to how users consume information is the algorithmic logic of certain social media platforms
and the way they engineer viral sharing in the interest of their business models. The non-neutral
algorithms of Facebook and Twitter actively use selection criteria to enhance the visibility of certain
information. What’s highly problematic about this is that the criteria attribute legitimacy to
popularity, thereby flooding the public with multiple, competing, and unverified assertions. This
isn’t just restricted to Facebook and Twitter; a variant of the problem exists at Google, where “auto-
fill search terms” assume user intentionality and conflate this with interest.
Issue Four: Political capture of platforms
Open networks, such as Twitter and YouTube, are particularly vulnerable to political capture by
populist leaders—and their armies of trolls and automated bots—for motivated use: e.g. to shut
down dissent and minority voices, create the false impression of momentum, intervene across
borders, or manipulate public sentiment. Of course, it is not just politicians and political parties that
can capture platforms—even established democratic governments have spent public money to
manipulate opinion over social media.
The British Army announced in January 2015 that one of its brigades would “focus on non-lethal
psychological operations using social networks like Facebook and Twitter to fight enemies by
gaining control of the narrative in the information age.” In other words, “the primary task of this
unit is to shape public behaviour through the use of ‘dynamic narratives’ to combat the political
propaganda disseminated by terrorist organizations.”
Trolls and bots disguised as ordinary citizens have become a weapon of choice for governments and
political leaders to shape online conversations in many illiberal regimes and movements.
Governments in Turkey, China, Israel, and Russia are known to have deployed thousands of hired
“social media operatives” who run multiple accounts and manage bots to shift or control public
opinion.
In Myanmar, several of the government’s official Facebook accounts propagate exclusionary and
hateful views about Muslim minorities.
Prime Minister Modi offers a direct-to-your-phone tweet service as part of the Digital India initiative,
and he has used Twitter to make significant policy announcements—such as the recent statement
on demonetization, which bypassed the conventional route of first presenting a major new policy to
Parliament.
The issue is worsened through automation. Automated bots can create the false impression of
momentum at scale: highly automated accounts, defined as accounts that tweeted 450 or more
times with a related hashtag and user mention, generated close to 18% of all Twitter traffic about
the 2016 U.S. presidential election.
There are also implications for national sovereignty, as the uncertain geographic origin of bots
facilitates their use by foreign governments or groups seeking to intervene in another country’s
political process. In some ways this undermines a crucial premise of democracy where processes are
set up to give voice to the people. Malevolent bots allow certain voices to be amplified
disproportionately to manipulate public sentiment. Moreover, few regulations or protocols protect
users against trolling and online abuse, and social media companies are not always quick to—or
equipped to— respond globally. The Myanmar ICT for Development Organization (MIDO) found that
only 10% of the posts it reported as hate speech on Facebook were ultimately removed.
Issue Five: Manipulation, micro-targeting, and behaviour change
Unprecedented personal data captured by social media platforms enables sophisticated
psychographic or behavioural profiling and micro-targeting. Social media companies today have
troves of data that governments would like to access and, in some cases, successfully have. One
use case for this micro-targeting is “dark advertising,” in which political advertisements are shown to
Facebook users without being subject to traditional regulatory guidelines.
The absence of appropriate regulation allows widespread surveillance and collection of user data,
a practice that has extended into democratic election campaigns. The continuous surveillance of users through
feedback mechanisms built into the structure of social media (retweets, likes, comments) and the
underlying algorithms allow campaigns to adapt strategies and messaging in real time. For instance,
the Trump campaign was measuring responses to 40,000–50,000 different variants of ads every day,
then adapting and evolving their messaging based on that feedback. And unlike TV and print, where
political ads are declared, online political messaging can be more amorphous, often appearing more
like news reporting or user messaging.
The New York Times reported on Facebook’s disclosure that “it had identified more than $100,000
worth of divisive ads on hot-button issues purchased by a shadowy Russian company linked to the
Kremlin.” Most of the 3,000 ads did not refer to particular candidates but instead focused on divisive
social issues.
Social media platforms have become a vastly preferred channel for digital advertising spend; this is
even more so the case for mobile advertising. In turn, this has further separated publisher and
journalist, cannibalized the main revenue sources of traditional news organizations, and depleted
their ability to produce quality news, hold the powerful to account, or to be gatekeepers of the
quality and terms of discourse. As a result, the monetization of our attention is undermining
journalism, which is traditionally a contributor to accountability and, ultimately, democracy.
Issue Six: Intolerance, exclusion of disadvantaged or marginalized voices, public humiliation, and
hate speech
In Myanmar, where there is much instability and violence within the country, major portions of the
content on Facebook—a significant source of news for locals—are said to be divisive and hateful.
With respect to disadvantaged or minority voices, demeaning hashtag labels have become an
organizing mechanism to target certain groups, e.g., “snowflake” for liberals in America and
“libtard” (a blend of “liberal” and “retard”).
The profusion of such content, and the relative lack of grievance and redress mechanisms, can result in
significant norm-shifting over time as to what is acceptable and allowable. Constant online abuse can
lead members of frequently targeted groups—women and minorities—to self-censor or opt out of
participating in public discourse.
CONCLUSION
Social media platforms are ingrained in our daily lives and provide much of the infrastructure of
democratic debate. They have essentially become the modern “public square,” and they have
command over both our attention and much of our personal data.
Platforms have struggled to remove content judged to be problematic, given the difficulty of
setting up normative codes, content standards, and policing capacity that would allow for
different outcomes. Such efforts—whether aimed at terrorism, bullying and violent language, or
targeted trolling and hate—create challenges for platforms that are reluctant to determine who
adjudicates speech, where to draw the line regarding freedom of expression, and how to
differentiate legitimate dissent from calls to violence or bullying.
While increased oversight might strengthen long-term trust and engagement, it might also come
with nearer-term attention costs that would require mitigation.