We have entered an era in which the most dangerous explosion is not the one that destroys buildings or kills hundreds of people, but the one that detonates inside billions of minds in no time. It is now everywhere, within everyone's reach. This is a terrifying new reality: generative AI has not just entered our lives but dangerously reshaped them, making its way into every aspect of life. Every technology has both good and bad impacts on people, but whenever a technology has been drawn into the defence field, its effects have become devastating.
When we look at the current security environment across the world, it has shifted entirely from traditional to non-traditional warfare. Traditional warfare, which relied on conventional techniques and the use of manpower, has transformed into 5th Generation Warfare (5GW), already being waged at the tactical level, where the battlefield is perception itself. The objective of 5GW is not to conquer land or defeat a military, but to destroy the enemy's ability to form a coherent picture of reality. Its key characteristics include minimal or no violence, narratives as weapons, and, most importantly, a target population that voluntarily destroys its own sense of reality. One of the most dangerous aspects of this warfare in the digital landscape is that defence is nearly impossible: whatever circulates on media shapes people's perception within minutes or hours, long before any counter-response arrives.
One of the most recent manifestations of 5GW enabled by AI-generated content came to the forefront, with an impact that could prove devastating not on the ground but in people's minds. On the evening of 10 November, a car exploded near a gate of Delhi's Red Fort, killing around 13 people. Within six hours of the blast, a confessional video in flawless Hindi, attributed to Jaish-e-Muhammad, began circulating on WhatsApp and Indian national television. Within the next twelve hours, the video had been seen by 40 million people. What makes it more striking is that after 24 hours, several AI detection tools marked it 99.9 percent synthetic. Here lies the main argument: the AI-made video, if indeed fabricated, circulated so rapidly on digital platforms that it had colonized millions of minds before its authenticity could be confirmed. Even when the truth was revealed, many people refused to believe the government's statements, insisting instead that the government might be hiding something from the public, because people believe what they observe first. Many may dismiss this episode as journalistic negligence, but it could be a fully matured demonstration of AI as a decisive enabler of 5GW.
Before 2023, in the early days of 5GW, such operations remained expensive, and most people were not even aware of them. They relied mostly on fake news and crude photoshopped images, which took days to produce and were less effective against people less exposed to technology. Generative AI has now collapsed that timeline from days to minutes and its cost from expensive to almost free. In the Red Fort attack, around 13 people died on the ground, but in the cognitive domain the blast fractured the minds of millions of Indians, leaving a lasting scar on collective perception. Fact-checking organizations in India debunked the video within 18 hours, a quick response, but it hardly mattered. What matters most is the number of people who had already formed their beliefs in the first three to six hours, when no one, not even the fact-checkers, was aware of the video's authenticity.
AI technology in the digital landscape has devastating effects, and everyone is under the same threat. In future incidents, a terrorist attack could be linked to any region or nation before the facts reach the screen. An election could be destabilized in the final hours before polling, when damage control is impossible. Political leaders can be threatened with generative AI videos and photos, and their image can be ruined.
Examined closely, the Delhi blast appears a textbook manifestation of 5GW through the demonstration of generative AI. The blast came right before the Bihar election, and blaming the enemy state, Pakistan, could help win it, since the majority of Indians hold unfavourable sentiments towards Pakistan. The narrative pushed across the country was that Islamic terror and Pakistani hands were behind the attack, which instantly deepened polarization in poll-bound states. The narrative spread not only by word of mouth and on social media platforms; many mainstream TV channels aired the fake video for hours, lending it institutional credibility (while in reality raising questions about that very credibility). Around the same time, fake and conspiracy accounts began commenting that the government was hiding the real story, further keeping the intended narrative alive. In 5GW, truth matters only in the version people want to believe. Thus, even after the video was proven fake, millions genuinely believed that a Pakistani terrorist had confessed to the attack.
The Delhi attack is not an isolated case. In Manipur, from 2023 to date, fake videos of the Kuki and Meitei communities have prolonged ethnic violence. During the 2024 Bangladesh protests, AI-generated videos of Indian agents shooting students went viral, fuelling anti-India protests. In 2020, during the Galwan clashes, deepfake audio of Chinese soldiers surrendering circulated for 48 hours before being debunked. All these episodes have left people's minds clouded with notions far removed from reality. Nor is India the only country caught in this dangerous pattern; the trend of weaponized generative AI content is taking hold across the globe. In Taiwan, deepfake audio of a presidential candidate went viral ahead of the 2024 elections, and South Korea detected and banned 129 deepfakes during its own 2024 elections.
In 2025, a surge in deepfakes can be observed in India amid elections and tension. According to one report, India stands to lose ₹70,000 crore to deepfake frauds in 2025 alone. Voice-changing and remixing tools are cheaply available, and India faces nearly 1,000 reported cases every month. Instead of taking responsible measures to control the misuse of the technology among citizens, the government itself is exploiting it for political gains.
The Red Fort blast and its cognitive detonation are not only devastating but alarming for the entire Indo-Pacific region. States cannot counter the cognitive effects when the deaths of around 13 people can fracture the minds of millions within hours. India, in collaboration with regional states, needs to establish real-time deepfake forensics and a transparent pre-bunking mechanism for elections. Mitigating the threat will require a regional framework, and as the threat has emerged globally, dedicated "cognitive task forces" are needed to contain its effects. India could also adopt pre-bunking models like those of Israel and Taiwan to counter the growing role of generative AI in 5GW, because once such content starts targeting ordinary people on digital platforms, its effects cannot be restricted. At the institutional level, its proliferation can be controlled, but only if the state genuinely wants to limit its effects; otherwise the results could be devastating.
Aarav Sharma is a young political analyst and columnist with a deep interest in South Asian geopolitics, international diplomacy, and policy reform. He graduated from King's College London with a focus on global governance and brings a comprehensive understanding of the relationship between domestic politics and foreign affairs. His work has appeared in youth-led political events and think-tank publications. Aarav is passionate about narrowing the gap between academia and policymaking.