Recruiters warned of deepfake scams amidst Grok AI controversy
As AI continues to develop at pace, new and increasingly malicious uses of the technology are coming to light – in particular, deepfakes, which blur the line between reality and sinister illusion. Rebecca Napier, IT Business Partner at Gi Group, a global leader in HR and recruitment, warns of the serious risks posed to recruitment and HR teams.
A deepfake is a form of synthetic media in which artificial intelligence is used to create highly realistic but fake images, videos, or audio recordings. It typically involves replacing a person’s face or voice with someone else’s, making it appear as though they said or did something they never actually did. While deepfakes have been used for entertainment purposes, they also raise serious ethical and security concerns, particularly when used to spread misinformation or impersonate others.
Most recently, the European Commission and Ofcom launched an investigation into Elon Musk’s X platform after it emerged that its AI feature, Grok, could be used to generate highly inappropriate deepfakes of real people.
Napier said: “The recent headlines around Grok AI being used to generate inappropriate deepfakes should set off alarm bells for those in recruitment and HR teams. We’re now in an era where we can no longer assume that what we see online is real.
“HR teams and recruiters need to be alert to this when it comes to matters of the workplace and codes of conduct, as often these issues are much more complex and sensitive than first meets the eye.”
Research by the Office of the Police Chief Scientific Adviser and conducted by Crest Advisory shows that 67 per cent of people believe they have already seen, or may have seen, a deepfake online. The UK Government also projected that 2025 would see approximately eight million deepfakes created and shared, up from 500,000 in 2023.
Napier continued: “The growing accessibility of deepfake technology is particularly worrying and from an IT perspective, the potential impact on recruitment and HR processes is huge. Deepfakes can be used to impersonate candidates during virtual interviews, fake employee misconduct, or even manipulate internal communications. As AI capabilities continue to evolve, the opportunities for reputational damage and fraud are only going to increase.
“During the recruitment process, it may be that HR personnel, recruiters or even team heads and potential line managers review a candidate’s social media to get a deeper sense of their authentic thoughts and feelings, as well as the language they use or how they engage with others. But given the growing prevalence of deepfakes, vigilance is essential.”
Napier highlights what to look out for to spot AI-generated video content: “When assessing video content, it’s important to look for common warning signs such as unnatural eye movement, blurred facial features, poor lip-syncing or inconsistent lighting. Beyond this, organisations should consider introducing stronger identity verification measures during online interviews and internal calls to reduce the risk of impersonation.
“There are also deepfake detection tools available, which can add an extra layer of protection. As incidents like those linked to Grok show, deepfakes are a very real issue and businesses need to get policies in place now for how to approach potential deepfake incidents.”