From social opinions to political ideologies, biased news can plant or reinforce distorted ideas in readers' minds. Unfortunately, introducing bias is no longer an exclusively human act.
Digital news sources like news websites, apps, and search engines use algorithms to collect news stories and offer them to readers. Though generally believed to be neutral, these algorithms can also promote bias and therefore have the potential to influence public perceptions.
As Head et al. (2020) put it, the age of ambient information is upon us. Amid the daily deluge of social media news, memes, opinion, propaganda, and advertising, worries are growing about how popular platforms, and the algorithms they rely on ever more heavily, could affect our lives, widen societal divides, and encourage polarization, extremism, and mistrust.
Online information aggregators like Google and Facebook are gradually taking the place of traditional media outlets, making them part of society's information gatekeepers. To cope with the expanding volume of information on the social web and the load it places on the typical user, these gatekeepers have lately begun to add personalization capabilities: algorithms that filter information for each individual.
A study by Kim and Moon (2021) documented a failed attempt to introduce algorithmic transparency into South Korea's digital environment, revealing the challenges and controversies that surround the use of algorithms to curate content such as news stories.
Meanwhile, research by the Pew Research Center revealed that an increasing number of people prefer to receive news from digital sources. Hence, the role that news aggregation algorithms play is under greater scrutiny than ever. It is necessary to take a closer look at the algorithmic biases that are creeping into the process of news collection to understand how news aggregators are shaping public perceptions.
How Does Algorithmic Bias Enter Digital News Aggregation?
Algorithmic biases can enter the process in many ways, some intentional and some unintentional.
1. Human Biases
According to Bozdag (2013), human bias can affect both an algorithm's design and its operation. Such biases may inadvertently enter the news aggregation process while designers and developers are building the algorithm.
These biases often influence how the platform selects and presents news stories, and their effect on public perceptions depends on which biases are introduced.
2. Training Data
Biases may also enter the process through the data selected to train the algorithm. The training data may lack diversity, be incomplete, or contain errors. Algorithms trained on such data will surface inaccurate stories or fail to offer stories with a wide range of perspectives. The resulting feeds often support narrow viewpoints, which can skew the public's perspective.
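As a rough illustration, a training corpus can be audited for skew before an algorithm ever learns from it. The sketch below is hypothetical: it assumes each article record carries "source" and "viewpoint" labels, which real-world datasets rarely provide out of the box.

```python
from collections import Counter

def audit_corpus(articles):
    """Report how training articles are distributed across sources
    and labeled viewpoints, flagging heavy concentration."""
    source_counts = Counter(a["source"] for a in articles)
    view_counts = Counter(a["viewpoint"] for a in articles)

    total = len(articles)
    for viewpoint, count in view_counts.most_common():
        share = count / total
        flag = "  <-- over-represented?" if share > 0.5 else ""
        print(f"{viewpoint:>10}: {share:.0%}{flag}")
    return source_counts, view_counts

# Toy example: a corpus dominated by one perspective.
corpus = [
    {"source": "Outlet A", "viewpoint": "left"},
    {"source": "Outlet A", "viewpoint": "left"},
    {"source": "Outlet B", "viewpoint": "left"},
    {"source": "Outlet C", "viewpoint": "right"},
]
audit_corpus(corpus)
```

Even a check this crude makes the imbalance visible before it propagates into the aggregator's output.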
3. Unreliable Sources
Verification of the credibility of news sources is another area where biases may seep into the process. This is because algorithms typically aren't trained to verify news sources and may rely on weak or inappropriate verification mechanisms.
As a result, users' news feeds end up containing stories from unreliable or biased sources, which contributes to the spread of misinformation.
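One common, if blunt, mitigation is to gate aggregation on a maintained credibility list rather than trusting whatever the crawler finds. The snippet below is a minimal sketch: the ratings dictionary, the example domains, and the 0.6 cutoff are all invented for illustration.

```python
# Hypothetical credibility scores (0 = unreliable, 1 = highly reliable).
SOURCE_RATINGS = {
    "established-wire.example": 0.9,
    "regional-paper.example": 0.7,
    "anonymous-blog.example": 0.2,
}

CREDIBILITY_THRESHOLD = 0.6  # arbitrary cutoff for this sketch

def filter_by_credibility(stories):
    """Drop stories whose source is unknown or rated below threshold.
    Unknown sources default to 0.0, i.e. they are excluded, not trusted."""
    return [
        s for s in stories
        if SOURCE_RATINGS.get(s["domain"], 0.0) >= CREDIBILITY_THRESHOLD
    ]

stories = [
    {"title": "Budget passes", "domain": "established-wire.example"},
    {"title": "Shocking secret!", "domain": "anonymous-blog.example"},
]
print(filter_by_credibility(stories))  # only the wire story survives
```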
4. Personalization
When tailoring news feeds to match the reader’s interests, algorithms sort through sources based on content preferences, user behavior, location, demographics, and similar factors.
Personalization often produces an echo chamber effect: the reader keeps receiving news stories similar to what they already like to read. A user who limits themselves to a certain perspective will only see their existing beliefs reinforced.
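The feedback loop is easy to see in code. The toy ranker below scores stories by how often the user has already clicked the same topic, so every click narrows the feed further. All names and weights here are invented for illustration, not taken from any real platform.

```python
from collections import Counter

def rank_feed(stories, click_history):
    """Rank stories by how often the user has clicked the same topic.
    Stories matching past clicks rise; unfamiliar topics sink."""
    topic_weights = Counter(click_history)
    return sorted(
        stories,
        key=lambda s: topic_weights.get(s["topic"], 0),
        reverse=True,
    )

stories = [
    {"title": "Tax reform explained", "topic": "economy"},
    {"title": "New climate report", "topic": "climate"},
    {"title": "Markets rally again", "topic": "economy"},
]

clicks = ["economy", "economy", "economy"]  # user keeps clicking one topic
for story in rank_feed(stories, clicks):
    print(story["title"])
# The climate story lands at the bottom, and the next round of clicks
# will likely reinforce the imbalance.
```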
5. Natural Language Processing (NLP) Systems
NLP systems can also reflect biases because they're largely trained on human-produced language, which is inherently biased.
These biases show up clearly in a study by Garimella et al. (2019), published in the ACL Anthology, which contrasted text written by men and women. Because many NLP systems used in news aggregation are trained on data from long-established news sources, much of it produced between the 1980s and 1990s by a homogeneous group of white, upper-class, middle-aged, educated men, the resulting models can absorb and reproduce racist, ageist, and sexist associations.
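A simplified version of such a measurement compares how close occupation words sit to gendered words in an embedding space. The vectors below are tiny made-up stand-ins so the example runs standalone; in practice you would load real pretrained embeddings with hundreds of dimensions.

```python
import numpy as np

# Toy 3-d "embeddings" chosen to mimic a skew often reported in models
# trained on older news corpora; real vectors come from a pretrained model.
vectors = {
    "he":     np.array([0.9, 0.1, 0.0]),
    "she":    np.array([0.1, 0.9, 0.0]),
    "doctor": np.array([0.8, 0.2, 0.1]),
    "nurse":  np.array([0.2, 0.8, 0.1]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for word in ("doctor", "nurse"):
    # Positive score: closer to "he"; negative: closer to "she".
    bias = cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])
    lean = "male" if bias > 0 else "female"
    print(f"{word}: association score {bias:+.2f} (leans {lean})")
```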
6. Media Bias
Algorithmic bias in digital news aggregation can also be intentionally incorporated by the media outlets that produce content. Media biases like these can enter the process through ideological focus, political interference, and lobbying.
Such biases can also enter the process due to the news aggregators’ reluctance to showcase the differences between related news stories. This can lead to the further perpetuation of biased storytelling that ultimately influences public perceptions.
One study by Aggarwal et al. (2020) reports that a significant share of news tweets are subjective and act as opinion-conditioning agents. The case of the "Twitter Files" also illustrates this: a series of internal documents revealed content-moderation discussions among Twitter employees over banned accounts.
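Subjectivity in news tweets is measurable with off-the-shelf tools. The snippet below uses TextBlob's subjectivity score (0 = objective, 1 = subjective) as a quick first pass; it is a blunt instrument, not a substitute for the annotation methods used in studies like Aggarwal et al. (2020), and the example tweets are invented.

```python
from textblob import TextBlob  # pip install textblob

tweets = [
    "The central bank raised interest rates by 0.25 points today.",
    "This outrageous rate hike will absolutely crush ordinary families!",
]

for tweet in tweets:
    # .sentiment.subjectivity ranges from 0.0 (objective) to 1.0 (subjective).
    score = TextBlob(tweet).sentiment.subjectivity
    label = "subjective" if score > 0.5 else "objective"
    print(f"{score:.2f} ({label}): {tweet}")
```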
7. Clickbait Content
Clickbait elements in headlines help increase user engagement on news aggregator platforms, which is one of their key priorities. When algorithms prioritize sensational content and attention-grabbing headlines, they naturally build bias into the process, because clickbait strategies rely on narrative and stylistic devices to grab the user's attention. And when users consume only the headlines without reading the full article, as is often the case, misinformation can follow.
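Those narrative and stylistic devices are regular enough that even crude pattern matching catches many of them. The heuristic below is only a sketch: the phrase list and scoring are invented and would miss plenty in practice, where a trained classifier on labeled headlines would do better.

```python
import re

# Hypothetical markers of clickbait style, hand-written for illustration.
CLICKBAIT_PATTERNS = [
    r"\byou won'?t believe\b",
    r"\bshocking\b",
    r"\bthis one trick\b",
    r"\bwhat happened next\b",
    r"\d+\s+(things|reasons|ways)\b",
]

def clickbait_score(headline):
    """Count how many clickbait markers appear in a headline."""
    text = headline.lower()
    return sum(bool(re.search(p, text)) for p in CLICKBAIT_PATTERNS)

headlines = [
    "Parliament approves infrastructure budget",
    "You won't believe what happened next - shocking details revealed",
]
for h in headlines:
    print(clickbait_score(h), h)
```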
8. Network or News Aggregator Bias
Since news aggregators are profit-based organizations, their algorithms prioritize content that aligns with their business interests. This will naturally lead to a biased representation of any subject matter, further shaping and reinforcing public perception to align with the platform’s views.
Various Forms of Algorithmic Bias and Practical Instances
Karteek, Y. (2023) explained algorithmic bias in many fields, such as:
- Bias in Data
The Street Bump app in Boston used smartphone sensors to identify potholes, but it was criticized for gathering biased data that over-represented pothole reports from affluent areas.
- Design Bias in Algorithms
Facial recognition systems: Because their algorithmic design rests primarily on datasets of lighter-skinned, male faces, systems from IBM, Microsoft, and Amazon have been shown to have higher error rates when classifying darker-skinned and female faces.
- Bias in Feedback Loops
Facebook's News Feed: The algorithm that determines what appears in users' news feeds has come under fire for allegedly polarizing political opinions by producing echo chambers around users' preexisting preferences and ideas.
- Bias in Pre-processing
COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) risk assessment tool: Because of the pre-processing methods used by the algorithm, this tool, used in the American legal system, was found to be biased against African Americans.
- Algorithmic Bias’s Effects on Particular Industries
Employment and Human Resources: Biased algorithms can reinforce biases based on race, gender, or other characteristics during hiring and promotion, resulting in a lack of diversity and unfair opportunities for employees.
Criminal Justice: Algorithmic bias in risk assessment and predictive policing tools can worsen racial and socioeconomic inequities already present in law enforcement, leading to unfair treatment and the perpetuation of stereotypes.
Healthcare: For certain patient populations, biases in medical algorithms may lead to incorrect diagnoses, unequal treatment, and restricted access to essential care.
Finance: Biased algorithms in credit scoring and loan approval systems can perpetuate past discriminatory practices, blocking disadvantaged people from accessing financial services.
Education: Personalized learning platforms and college admissions processes, for example, might reinforce preexisting inequities and restrict educational opportunities for underprivileged groups.
Conclusion
The only surefire way for audiences and producers to identify biases in the news process is to actively search for them. But tackling algorithmic bias in digital news aggregation requires a broader approach, one that introduces transparency, diversity, fairness, and accountability into the process.
Research on improving user awareness shows promising results from methods like bias visualization and results re-ranking. Another approach is to hold news aggregator platforms responsible for perpetuating bias. Ultimately, a multifaceted approach is necessary to adequately address and minimize algorithmic bias.
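Results re-ranking, for instance, can trade a little relevance for viewpoint diversity. The greedy sketch below penalizes stories whose perspective is already represented near the top of the feed; the 0.3 penalty and the perspective labels are illustrative assumptions, not any platform's actual method.

```python
def rerank_for_diversity(stories, penalty=0.3):
    """Greedy re-ranking: pick the highest adjusted-score story each
    round, discounting perspectives that have already been shown."""
    shown_perspectives = set()
    remaining = list(stories)
    ranked = []
    while remaining:
        best = max(
            remaining,
            key=lambda s: s["score"]
            - (penalty if s["perspective"] in shown_perspectives else 0),
        )
        remaining.remove(best)
        shown_perspectives.add(best["perspective"])
        ranked.append(best)
    return ranked

stories = [
    {"title": "A", "perspective": "left",   "score": 0.95},
    {"title": "B", "perspective": "left",   "score": 0.90},
    {"title": "C", "perspective": "right",  "score": 0.80},
    {"title": "D", "perspective": "center", "score": 0.75},
]
print([s["title"] for s in rerank_for_diversity(stories)])
# -> ['A', 'C', 'D', 'B']: different viewpoints surface earlier than
#    a pure relevance sort would allow.
```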
Today’s world is increasingly concerned about algorithmic bias as AI-driven technologies proliferate across multiple industries. We can endeavor to create more impartial and equitable solutions by comprehending the many forms of biases as well as their practical ramifications.
We can address algorithmic bias and promote fairness in AI-driven decisions in several ways, including using diverse datasets, bias-aware pre-processing, fairness-aware algorithms, regular audits, and interdisciplinary collaboration. Prioritizing justice and inclusivity is essential as we develop AI technologies, so that everyone, regardless of color, gender, or socioeconomic background, can benefit from these advances.