Beware of AI Scams!

From revolutionizing industries like healthcare and finance to replacing jobs in publishing and graphics, artificial intelligence (AI) is changing the world. Unfortunately, scammers are also using AI to con victims out of their money and personal information. Here’s what you need to know about AI scams and how to protect yourself.

Types of AI scams

AI scams come in many forms. Here are some of the most common:

  • Deepfake scams. In these scams, fraudsters use AI to create realistic videos or audio clips, often mimicking real people. Scammers use deepfakes to impersonate business executives, family members, political figures or celebrities to trick people into transferring money or revealing sensitive information.

  • AI-powered phishing emails. Scammers use AI to craft personalized and convincing emails that mimic legitimate organizations. These emails often contain fake links or attachments designed to steal personal or financial information. 

  • Chatbot impersonation. Here, scammers deploy AI-driven chatbots to impersonate customer service representatives or company officials. These bots engage in real-time conversations, persuading victims to share sensitive information or make payments. 

  • AI voice cloning. In these scams, fraudsters use AI to replicate someone’s voice, typically that of a family member or close contact. They then use the cloned voice in phone calls to request urgent financial help.

  • Job offer scams. In these scams, fraudsters use AI to scrape data from job boards and LinkedIn profiles before targeting job seekers with fake offers. They use automated systems to conduct interviews and request upfront fees. 

How to spot AI scams 

Don’t get caught in an AI scam! Here’s how to spot one: 

  • Unusual urgency. If someone is demanding immediate action, such as transferring money, pause to verify the request’s authenticity.

  • Inconsistencies in communication. Check for inconsistencies in tone, grammar or details that don’t align with the purported sender’s usual style.

  • Requests for personal information. Legitimate organizations rarely ask for sensitive information, like passwords, Social Security numbers or credit card details, by email, text or phone call.

  • Unverified sources. If you receive a message from an unfamiliar email address, phone number or chatbot, cross-check it against the official contact details on the organization’s website.

How to protect yourself

  • Verify before you act. Always double-check any request for money or personal details.

  • Strengthen cybersecurity. Use strong, unique passwords and enable multi-factor authentication (MFA) for your accounts. Keep your software and devices updated to protect against vulnerabilities.

  • Be cautious with AI tools. Avoid sharing sensitive information with AI tools or chatbots unless you are certain of their legitimacy and security.

  • Educate yourself and others. Stay informed about the latest AI scams and share your knowledge with others.

  • Monitor your accounts. Regularly check your account and credit card statements for unauthorized transactions.

  • Use anti-scam technology. Install reliable antivirus software and consider tools specifically designed to detect and block phishing attempts or deepfake content.

  • Report suspicious activity. If you encounter an AI scam, report it to local authorities, the Federal Trade Commission (FTC) and/or other relevant organizations.

Stay safe!

Kyle Trondle