Gallagher Bassett’s director of risk consulting, Ashley Easen, highlights the digital fraud threats businesses face and how these can be countered
As society continues to rapidly evolve, driven by the increased usage of information technology, new opportunities for bad actors to exploit vulnerabilities have emerged.
One of the key risks for businesses is social engineering, a deceptive tactic used to manipulate individuals into divulging confidential or personal information for fraudulent purposes.
In the realm of information security, social engineering takes two distinct forms. The first involves psychological manipulation: attackers impersonate trusted parties, such as an important client, to lure targets into visiting malicious websites that infect their organisation's workstations.
The second method utilises IT to obtain banking credentials through phishing attacks, ultimately leading to the theft of an organisation's money. With the rapid advance of technology, these techniques have become prevalent in cyber attacks.
Social engineering exploitation
Various tactics are employed in social engineering attacks, including pretexting, baiting, quid pro quo, tailgating, water holing, phishing, spear phishing, honey trapping, scareware, whaling, pharming, and vishing.
These tactics exploit human vulnerabilities and trust, often targeting individuals through email addresses or using the information available on social media platforms.
To defend against social engineering attacks, organisations must prioritise training their staff to recognise psychological triggers and other identifiers.
Employees should be encouraged to treat unsolicited communications and unknown individuals with suspicion. It is crucial to verify the source of emails, checking for spelling or grammar errors and double-checking the sender's name and address.
Opening suspicious email attachments should be discouraged and sensitive information should only be provided after appropriate checks have been made.
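The sender checks described above can be partially automated. The sketch below is a minimal illustration, not a complete defence: the trusted domain list and the similarity threshold are hypothetical assumptions, and a flagged address should always prompt manual verification rather than automatic action.

```python
import difflib

# Hypothetical allowlist of domains the organisation normally deals with.
TRUSTED_DOMAINS = {"example.com", "supplier.co.uk"}

def is_suspicious_sender(address: str) -> bool:
    """Flag senders whose domain is unknown or a near-miss
    (lookalike) of a trusted domain -- a common phishing ploy."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False
    # A near-identical domain (e.g. "examp1e.com") is a likely
    # typosquat; a plainly unknown one still warrants manual checks.
    for trusted in TRUSTED_DOMAINS:
        if difflib.SequenceMatcher(None, domain, trusted).ratio() > 0.85:
            return True  # lookalike of a trusted domain
    return True  # unknown sender: verify before acting

print(is_suspicious_sender("accounts@example.com"))  # False
print(is_suspicious_sender("accounts@examp1e.com"))  # True
```

Simple heuristics like this catch only the crudest lookalike domains; they complement, rather than replace, the staff training and verification procedures described here.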
Additionally, organisations should maintain an established cyber threat strategy. This includes ensuring employees are aware of cyber threats, testing the effectiveness of guidance and training, and reinforcing technological cyber security measures.
Deepfake threat
In recent years, the emergence of deepfakes has added another layer of complexity to the threat landscape.
Deepfakes are synthetic media generated using artificial intelligence (AI) and machine learning. This technology creates realistic videos, pictures, audio and text of events that never occurred.
Deceptive and misleading deepfake content can be used to spread false information, as well as create misinformation and disinformation, which poses significant risks to individuals, industries and societies.
The volume of deepfake content online has surged by 300% between 2022 and 2023, according to recent research. This alarming increase suggests that this trend will continue to grow, presenting even greater difficulties for organisations and societies as the technology becomes more accessible and affordable.
Manipulated media of this kind can be categorised into three types: deepfakes, cheapfakes and shallowfakes.
Deepfakes utilise deep learning techniques to create manipulated content, while cheapfakes are produced with cheaper, more accessible software. Shallowfakes are audio-visual manipulations created through conventional video editing software, altering the appearance or speech of individuals.
To combat the deepfake threat, organisations must prioritise strategies to strengthen their reputation and address falsehoods. Recognising the signs of deepfakes, such as unnatural movements, facial expressions and body positioning, can help identify and mitigate their impact.
As society continues to embrace information technology, the risks of social engineering and deepfakes become increasingly prevalent.
Organisations must prioritise training and awareness programmes to equip staff with the knowledge and skills to identify and defend against these threats.
By implementing robust cyber security measures and staying vigilant, organisations can protect themselves and their stakeholders from the detrimental effects of social engineering and deepfakes.