AI: The Necessary Evil?

Anusha Singh | Wednesday 2nd October 2024, 04:38 EDT

Keeping pace with our rapidly advancing world has become a necessary evil, and not every segment of society can keep up. For some the race is manageable; for others it is a struggle. The elderly, in particular, often grapple with the complexities of technology and digital media, while women and other vulnerable groups face heightened risks from malicious actors. As technology evolves, even digital literacy does not always provide the protection they need.

The rapid advancement of generative AI (gen AI) has prompted regulators worldwide to rush to comprehend, control, and ensure the safety of this transformative technology, all while preserving its potential benefits. As industries increasingly adopt gen AI, a new challenge arises for risk and compliance functions: balancing the use of innovative technology within an evolving—and often inconsistent—regulatory framework.

The recent trailer for Ananya Panday’s upcoming Netflix thriller ‘CTRL’ serves as a chilling reminder of the dangers posed by technological advancements. While fictional, it offers a glimpse into the dystopian future we could face if we fail to keep these developments in check. The swift breakthroughs in AI have intensified pressure on rule-makers to keep pace with this ever-evolving landscape.

Moreover, the misuse of AI can profoundly impact community safety, cohesion, and trust, especially during festive periods that are meant to foster joy and unity. During such times, communities often let their guard down, making them particularly vulnerable to attacks and fraud.

Elderly at greater risk

Dr Harjinder Singh Lallie, PhD, MPhil, MSc, SFHEA, M.Inst.IP, is a professor of Cyber Security at the University of Warwick. He highlights that while all sectors of society are vulnerable to cyber threats, those lacking technological knowledge are particularly at risk. "People with some tech understanding can apply that when making decisions, but those without it often struggle. Their judgment can be impaired because they don’t have the foundational knowledge to make informed choices," he explains.

This is especially true for the elderly in the South Asian community, who may not be as familiar with modern technology. As the festive period approaches, many elderly individuals are eager to participate in community activities and contribute to charitable causes. With so much now conducted online, scammers increasingly target this demographic in the name of charities or temples. Even those who are technologically savvy can fall victim to these scams, so it’s unsurprising that older members of the community are often at risk.

Age UK, a leading charity supporting older people, shared concerning statistics from their research. Nearly one in five people over the age of 50 in the UK—around 4.9 million individuals—express fear of answering their phones due to scam concerns. Similarly, 2.8 million people in this age group are afraid to open their doors for the same reason. According to the Crime Survey for England and Wales, an average of four people aged 50 and older are scammed every minute, and Age UK suspects the real figure could be even higher due to underreporting.

While scams can affect anyone, Age UK warns that older people, particularly those living alone or with cognitive impairments, are at greater risk of falling victim to certain types of fraud. Financial losses are common, but the impact often extends beyond money. Many older victims experience feelings of shame, embarrassment, depression, social isolation, and even physical health decline. Some lose their independence entirely after being scammed.

Caroline Abrahams CBE, Charity Director at Age UK, commented on the issue: "Many scammers are highly sophisticated criminals, and it’s easy for anyone to be tricked. However, older people sometimes face unique risk factors, including social isolation, limited digital literacy, and cognitive impairments, which make them prime targets for fraudsters."

She continued, "We’re proud that Age UK’s scams programme has reached so many older people in local communities, helping to empower them against this growing threat. However, a much larger national effort is needed to protect older people from this insidious crime. While our programme offers a blueprint, this kind of support must be more widely available, which is far from the case today."

Women being targeted as government struggles to keep up

The rise of technology has made cybercrime against women increasingly prevalent, a reminder that the online world is still far from one in which women can feel completely safe. It is estimated that one in ten women has been a victim of cyberviolence since the age of fifteen.

Festivals like Navratri, which emphasise socialising and connection, often see younger generations quickly exchanging social media information. However, one poor decision can expose women to various cybercrimes, including online harassment, cyberstalking, revenge porn, doxxing, and identity theft. The anonymity and vast reach of the internet allow perpetrators to target women with ease, leading to emotional, psychological, and even physical harm.

At the same time, the UK's digital economy is facing growing threats from cybercriminals and state actors, with essential public services and infrastructure increasingly under attack. In the past 18 months, hospitals, universities, local authorities, democratic institutions, and government departments have all been targeted.

Recent cyberattacks on the NHS and the Ministry of Defence highlight the severe consequences such breaches can have. Our current laws have struggled to keep up with rapid technological advancements, and if even powerful government organisations and essential institutions are struggling to fend off these threats, it underscores how vulnerable women and other marginalised members of society are.

Dr Lallie expressed his concerns about how AI is intensifying cybersecurity challenges, stating, "A major problem is that cybercriminals are now using AI to launch more sophisticated attacks. In the past, preparing and monitoring these attacks would take two to three days. Attackers needed time to configure their strategies and analyse the results to gauge their effectiveness. But with AI, these processes can now be completed in just a matter of hours. AI can develop, launch, and analyse attacks in real time, making it far more difficult for us to defend against them."

"On the positive side, we are also leveraging AI to detect and respond to attacks more quickly and efficiently. Both sides are utilising AI, but at present, cybercriminals seem to be slightly ahead in terms of sophistication. To close this gap, we must make significant investments in research, development, and education to enhance our capabilities. Ultimately, it's crucial that we not only match their level of sophistication but stay two steps ahead to effectively counter these threats."

Commenting on what steps the government has taken to boost cybersecurity and the work being done to protect communities as the festive season approaches, a spokesperson for the Department for Science, Innovation, and Technology said, “Whether staying in touch with loved ones, using the web for shopping, or keeping up to date with the latest headlines, more and more of our lives are being spent online. However, the threats we face are also growing, with people becoming more determined than ever to find new opportunities to disrupt our digital lives and take advantage of our communities.

“We won’t leave people out in the cold this festive season, and our New Year’s resolution is to introduce a Cyber Security and Resilience Bill in 2025 which will make our online world more secure than ever. The government’s Stop! Think Fraud campaign also offers free advice to consumers and businesses on preventing fraud and cyber-crime, ensuring the UK has the knowledge, tools, and expertise we need to keep cyber criminals at bay and our communities safe for life, not just for the festive season.”

It is essential for community leaders, policymakers, and technology developers to collaborate in establishing safeguards that promote the responsible use of AI, ensuring that the advantages of technology are maximised while its potential harms are mitigated. As governments and regulators seek to define an effective control environment, the approaches taking shape remain fragmented and frequently misaligned, creating significant challenges for organisations navigating this uncertain terrain.

