
Technological innovations to counter deepfakes in social media polarization and public discourse for the sustainability of the nation state : S.K. Singh, ex-Scientist, DRDO, and social entrepreneur
Radicalization is a complex process in which an individual or group adopts increasingly extreme political, social, or religious views. It involves harmful attempts to shift others from moderate, mainstream views to positions that are extreme and potentially dangerous for society and individuals. A key aspect of radicalization in the context of terrorism is the acceptance and justification of violence as a means to achieve political or ideological objectives. Social media platforms carrying deepfakes play a key role in propagating the false narratives that lead to radicalization.
Deepfakes, an emergent type of threat falling under the greater and more pervasive umbrella of synthetic media, utilize a form of artificial intelligence/machine learning (AI/ML) to create believable, realistic videos, pictures, audio, and text of events which never happened. The threat of deepfakes and synthetic media comes not from the technology used to create them, but from people's natural inclination to believe what they see; as a result, deepfakes and synthetic media do not need to be particularly advanced or believable in order to be effective in spreading mis/disinformation.
Social media platforms can amplify public discourse polarization and radicalization through their use of algorithms that create echo chambers and filter bubbles, prioritizing engagement and extreme content. This dynamic contributes to the rapid spread of misinformation and hate speech, reinforcing existing beliefs and leading to ideological isolation. The resulting environment can marginalize moderate voices, foster hostility, and, in some cases, spill over into offline violence, deepening societal divisions and threatening democratic processes.
How Social Media Fuels Polarization and Radicalization
Algorithmic Reinforcement: Social media algorithms are designed to maximize user engagement by continuously showing content similar to what a user has previously liked or viewed. This creates personalized “echo chambers” where users are primarily exposed to ideas and information that align with their existing beliefs, reinforcing those beliefs and isolating users from opposing viewpoints.
Prioritization of Extreme Content: Platforms often reward engagement over factual accuracy, leading to a situation where more extreme, emotionally charged, or controversial content receives greater visibility, likes, and shares. This can elevate radical or divisive ideas, pushing moderate perspectives to the margins.
Misinformation and Disinformation: Unverified sources can be presented as reliable, and misinformation and disinformation can spread rapidly, often faster than factual information. This fuels polarization by blurring the lines between reality and fiction and presenting partisan narratives in different ways on different platforms.
Regression to Meanness: The combination of algorithmic curation and the spread of fake news can lead to a “regression to meanness,” where toxic and aggressive voices become more prevalent in public discourse, overpowering more balanced and nuanced perspectives.
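The combined effect of algorithmic reinforcement and the prioritization of extreme content can be illustrated with a toy simulation. This is a minimal sketch, not a description of any real platform: the engagement model (similarity plus an extremity bonus), the weights, and the belief-update rule are all illustrative assumptions.

```python
import random

def engagement(user_belief, item_stance):
    # Assumed engagement model: users engage more with content close to
    # their current belief (similarity), and extreme content gets an
    # extra visibility bonus -- both values are illustrative.
    similarity = 1.0 - abs(user_belief - item_stance)
    extremity = abs(item_stance - 0.5) * 2.0   # 0 at the centre, 1 at the poles
    return similarity + 0.5 * extremity

def recommend(user_belief, items, k=3):
    # Rank the candidate pool purely by predicted engagement -- with no
    # diversity or accuracy term -- and surface only the top-k items.
    return sorted(items, key=lambda s: -engagement(user_belief, s))[:k]

def simulate(steps=50, seed=0):
    rng = random.Random(seed)
    belief = 0.6   # user starts slightly off-centre (0 and 1 are the poles)
    for _ in range(steps):
        pool = [rng.random() for _ in range(20)]   # random content stances
        shown = recommend(belief, pool)
        # Repeated exposure nudges belief toward the average of what is shown.
        belief += 0.1 * (sum(shown) / len(shown) - belief)
    return belief

print(round(simulate(), 2))
```

Under these assumptions the recommended items consistently sit between the user's current position and the nearest pole, so the simulated belief drifts away from the centre rather than toward it, which is the echo-chamber dynamic the section describes.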
Impact on Public Discourse
Ideological Bubbles: Users become trapped in ideological bubbles, curating their own information feeds and limiting their exposure to diverse perspectives.
Deepening Divides: Political communication on social media tends to accelerate division, and the intertwining of state-controlled media and social media can allow ruling parties to dominate narratives and deepen ideological divides.
Threats to National Security: The proliferation of polarized content, misinformation, and hate speech, exacerbated by social media, poses a significant threat to national security.
Offline Consequences: The amplified polarization and hostility online can have spillover effects into the offline world, contributing to increased protests and even offline violence.
Counter-radicalization refers to preventative measures and interventions that aim to stop individuals from adopting extremist ideologies and engaging in violent extremism. It involves a range of strategies addressing both the cognitive (beliefs) and behavioral aspects of radicalization. These efforts often include education, community engagement, and de-radicalization programs.
Early warning of radicalization, both in the field and in online narratives, together with technological interventions to combat and counter it, would be of enormous help for national security and sustainability.
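One simple technological building block for such early warning can be sketched as a volume-spike detector over a tracked online narrative. This assumes daily mention counts are already available; the function name, window size, and threshold below are illustrative choices, not part of any deployed system.

```python
from statistics import mean, pstdev

def spike_alerts(daily_counts, window=7, z_threshold=3.0):
    # Flag days where the mention volume of a tracked narrative exceeds
    # the trailing window's mean by more than z_threshold standard
    # deviations -- a crude anomaly signal for sudden amplification.
    alerts = []
    for i in range(window, len(daily_counts)):
        hist = daily_counts[i - window:i]
        mu, sigma = mean(hist), pstdev(hist)
        if sigma and (daily_counts[i] - mu) / sigma > z_threshold:
            alerts.append(i)   # index of the anomalous day
    return alerts
```

For example, a week of roughly ten mentions a day followed by sixty would be flagged, while steady chatter would not. A real early-warning pipeline would of course combine many such signals with human review.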
AI-generated text is another type of deepfake that is a growing challenge. Whereas researchers have identified a number of weaknesses in image, video, and audio deepfakes as means of detecting them, deepfake text is not so easy to detect. It is not out of the question that a user's texting style, which can often be informal, could be replicated using deepfake technology. All of these types of deepfake media – image, video, audio, and text – could be used to simulate or alter a specific individual or the representation of that individual. This is the primary threat of deepfakes. However, this threat is not restricted to deepfakes alone, but incorporates the entire field of “synthetic media” and their use in disinformation.
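To make the text-detection difficulty concrete, the sketch below computes two coarse stylometric signals sometimes discussed as weak cues for machine-generated text. This is illustrative only: practical detectors rely on model-based measures such as perplexity, and neither signal here is reliable on its own.

```python
import re
from statistics import mean, pstdev

def stylometric_signals(text):
    # Two coarse, heuristic signals (illustrative assumptions, not a detector):
    #  - type_token_ratio: vocabulary diversity (distinct words / total words)
    #  - burstiness: relative variation in sentence length; human prose
    #    often varies more than some generated text
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return {
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        "burstiness": pstdev(lengths) / max(mean(lengths), 1) if lengths else 0.0,
    }

print(stylometric_signals("This is a short sentence. Here is a rather longer, "
                          "more meandering sentence with many extra words!"))
```

The weakness the paragraph points to is visible here: an adversary can trivially vary sentence length and vocabulary, so surface statistics alone cannot distinguish human from machine text.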
The scope of deepfake detection technology is broad and vital, encompassing real-time social media monitoring, forensic analysis in law enforcement, content verification for news outlets, and intellectual property protection for the entertainment industry. As deepfake technology evolves, the scope expands to include more sophisticated AI methods such as transformer-based architectures, multi-modal analysis (audio, video, text), and even brain-inspired neuromorphic computing to address challenges like energy efficiency and online learning. The core development areas focus on enhancing algorithm robustness against compression and network laundering, improving model interpretability for trust, and creating more diverse training datasets to counter evolving fakes. – S.K. Singh