Are you hiring a real person in a WFH (work from home) environment?
Gone are the days of dropping off a resume at your local branch or office – recruitment has moved almost exclusively online, making it easier than ever for businesses to connect with talent from around the world. However, this new convenience comes with a set of challenges, one of which is the risk of deepfakes and covert job outsourcing.
Deepfakes, powered by generative artificial intelligence (GenAI), are synthetic media in which a person's likeness is copied or created from scratch. In the context of HR or recruitment, imagine interviewing a WFH candidate over a video call, only to find out later that the person you were speaking to was a deepfake, and that the 'person' doing the job was outsourcing almost all of it to GenAI tools. This might sound like science fiction, but with GenAI now so easily accessible, it is a real possibility that recruitment and management professionals need to be aware of.
To computer programmers and those who know a little about automation, this idea isn't new. For years, many people have either automated parts of their jobs for efficiency or outsourced tasks they didn't know how to do, or didn't have time for. What is your organisation's ethical stance on this practice?
Are employees using AI securely?
It's highly likely that GenAI is already being used at your organisation. Even familiar services such as Google Search have used AI in their algorithms for many years. With AI tools now far more accessible, employees need to be aware of the potential risks, such as privacy breaches and the misuse of AI technologies, including deepfakes. Companies should provide training and guidelines to help employees use AI tools responsibly and safely.
Simple guidelines might include:
- Access by request and approval
- No employee, customer or personal data can be entered
- No use of AI/LLM material in customer-facing reports
- All output is considered a draft
Using AI tools such as GPT-4 for first drafts, inspiration, and thought starters is generally fine (check with your organisation), but bear in mind that their output should not be treated as fact, because these models can hallucinate. Find your own citations before writing up your final draft.
Understanding BEC and VEC dangers
Business Email Compromise (BEC) and Vendor Email Compromise (VEC) are two significant threats that organisations need to be aware of. BEC is a type of scam where an attacker impersonates a senior employee from within the organisation and attempts to trick an employee or customer into transferring funds or sensitive information. VEC is similar, but the attacker impersonates a vendor or supplier.
Generative AI can make these scams more convincing. For example, it can be used to create deepfake audio or video that makes an impersonation far more believable. This is a significant risk, especially for organisations with many employees working remotely and relying on digital communication.
How do we mitigate the risk?
While the risks associated with deepfakes and GenAI are real, there are steps that organisations can take to protect themselves.
- Education and awareness: The first step is to educate employees about the risks of deepfakes and GenAI. This includes training on how to identify deepfakes and related scams.
- Regular contact with remote workers: Encourage collaboration, and in-person meetups where possible. Before hiring, ensure that references are checked diligently.
- Secure communication channels: Use secure and verified communication channels. Encourage employees to double-check email addresses and contact information.
- Verification processes: Implement verification processes and procedures for financial transactions or the sharing of sensitive information. This could include multi-factor authentication or requiring a second person to approve transactions.
- AI detection tools: Use AI to fight back. There are GenAI-powered tools that can help detect deepfakes and other AI-generated scams. Note that these tools are still in their infancy and should be used with caution.
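To make the verification step above concrete, here is a minimal sketch of one automated check a finance or IT team might run before acting on a payment request: confirming that the sender's email domain exactly matches a pre-approved vendor domain. The vendor names, domains, and function names below are invented for illustration, and an exact-match check like this is only one layer of a verification process, not a substitute for human approval.

```python
# Hypothetical sketch: verify a sender against an approved vendor list
# before a payment request is acted on. All names here are invented.

APPROVED_VENDOR_DOMAINS = {
    "acme-supplies.example": "Acme Supplies",
    "northwind.example": "Northwind Traders",
}

def sender_domain(email_address: str) -> str:
    """Return the domain part of an email address, lower-cased."""
    return email_address.rsplit("@", 1)[-1].lower()

def is_approved_vendor(email_address: str) -> bool:
    """True only if the sender's domain exactly matches an approved vendor.
    Look-alike domains (e.g. 'acme-supp1ies.example') fail this check."""
    return sender_domain(email_address) in APPROVED_VENDOR_DOMAINS

print(is_approved_vendor("billing@acme-supplies.example"))   # True
print(is_approved_vendor("billing@acme-supp1ies.example"))   # False
```

The exact-match rule is deliberate: VEC attacks often rely on look-alike domains that differ by a single character, which a casual glance misses but a strict comparison catches.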
While deepfakes and GenAI present new challenges for HR and management, these can be mitigated with the right knowledge and tools. By staying informed and proactive, professionals can navigate these challenges and continue to foster a safe and productive work environment.