March 2025
The BSD Information Security office will be hosting a Secure Destruction Event for all BSD, UCM and University employees to ensure the safe disposal of e-waste and sensitive paper records. Yes, we are accepting your home computers! Proper disposal of computing electronics and paper records helps protect sensitive information and reduces security risks. Please participate and help keep our data safe!
Hard drives will be removed from systems, and a picture of the serial number will be taken at the time of arrival. Hard drives will be kept in locked storage bins. After the event, the vendor will shred the hard drives and provide us with a certificate of destruction and a manifest certifying their destruction.
Dates:
Wednesday, April 2, 9:00AM-3:00PM
Thursday, April 3, 9:00AM-3:00PM
Friday, April 4, 9:00AM-3:00PM
Location:
Surgery Brain Research Pavilion, Room J103
For paper destruction, please remove any metal binder clips beforehand; staples are OK. Please feel free to contact security@bsd.uchicago.edu if you have any questions.
Tax season is here again! Everyone’s favorite time of year, right? While we are gathering receipts and crunching numbers, scammers are hard at work too. This month, we’re diving into tax scams: the sneaky ways cybercriminals use advanced technology and social engineering, and how AI tools fit into the mix. The more we know, the better we can protect our data. As always, scammers exploit deadlines and urgent needs to pressure victims into handing over sensitive information or making unauthorized payments. This tactic isn’t new, but what is new is the increasing reliance on digital communication. Cybercriminals are now leveraging artificial intelligence and automated systems to make their attacks more convincing, easier to deploy at scale, and harder to detect. As a result, individuals need to be vigilant, recognize red flags, and ensure they are engaging with legitimate tax authorities to avoid falling prey to these dangerous financial scams.
The “Dirty Dozen,” as reported by the IRS, includes scams such as:
- Email phishing: Fake emails that appear to be from legitimate organizations, including the IRS, to steal personal information.
- Smishing: Similar to phishing, but through phone text messages.
- Phone scams: Fraudsters posing as IRS agents call to demand immediate payment.
- Social media scams: Using social media platforms to spread false information about tax credits or refunds.
- Fake charities: Scammers create fake charities to exploit people's generosity.
- Inflated refund claims: Promoters promising large refunds based on false information.
- Tax return preparer fraud: Dishonest tax preparers who commit fraud or steal information.
- Offer in compromise mills: Companies that mislead taxpayers about their ability to settle tax debts for pennies on the dollar.
- Fake payments with repayment demands: Scammers send fake refunds and then demand repayment.
- Payroll and HR scams: Fraudsters targeting payroll or human resources departments to steal employee information.
- Ransomware: Malicious software that locks files by encryption until a ransom is paid (no guarantee that the files will be released).
- Misleading tax avoidance schemes: Promoters pushing illegal schemes to avoid paying taxes.
The Top Statistics
- In 2024, the IRS Criminal Investigation unit uncovered over $9.1 billion in tax fraud and financial crimes, up roughly 60% from $5.7 billion in 2022
- 56% of individuals have encountered AI-powered tax scams featuring realistic voices
- 81% of people reported being financially impacted after falling victim to a tax-related scam
The generative AI technologies and techniques most used by cybercriminals:
- Phishing – AI incorporates publicly available data into email communications, making fake messages more personal and credible.
- Deepfakes – Realistic fake videos and recordings impersonating individuals
- Voice cloning – Mimicking the voices of trusted individuals to make fraudulent calls or voicemails seem genuine.
Last year, HR and Finance departments were targeted by scammers using AI-generated emails and voice deepfakes to obtain sensitive tax documents, such as W-2s, by claiming the documents had incorrect information and required updates. The stolen information is then used to file fraudulent tax returns or steal identities. These scams highlight the importance of being vigilant and using security tools to protect personal and financial information during tax season.
AI Fools with the use of a few tools…
AI Generated Phishing
Phishing attacks have long been recognized as a dangerous threat. Pretending to be a trusted entity is highly effective, but it has usually been limited by the fact that personalizing a message to a specific person requires research that costs the scammer significant time and effort. Generative AI tools, however, excel at exactly these tasks. AI-generated text removes the typical signs of phishing:
- Obvious typos/misspellings
- Bad grammar
- Awkward phrasing/too casual
- Strange fonts and poor formatting
- Low-quality, unprofessional images
- Over-the-top threats, now replaced with subtler, more psychologically plausible pressure
AI can be used to create specialized messages that impersonate a person or organization to a degree beyond what is normal for a phishing attack. For example, an attacker could scrape an employee’s social media posts and use them to train an AI to write an email to that employee’s boss in the employee’s typical vocabulary, mannerisms, and grammar, convincing the boss to share confidential information. Alternatively, an attacker could look up a name, copy and paste the search results into an AI chatbot, and prompt it for templates targeting that demographic. A similar strategy was used to generate the template below, using the free chatbot Microsoft Copilot, in just a few minutes:
With little effort, cybercriminals can generate these ‘Mad Libs’ style templates at no cost and in very little time. Cybercriminals constantly send out phishing emails to increase their chances of getting a bite, so tools that drastically cut the time it takes to create a personalized message have a big impact. AI also lowers the barrier of entry: where creating an effective phishing email once required language skills and knowledge of social engineering, individuals with little experience can now generate highly targeted scams with minimal effort. Phishing emails will only become more effective over time, so it is more important than ever to stay vigilant about where an email is coming from, what it is asking you to do, and whether its offer is too good to be true.
AI Voice Cloning and Deepfakes
Impersonation through text is already an effective tactic for scammers, but what if they could pretend to be someone you know over the phone, or in a video? With just 30-60 seconds of audio or video, easily obtained from social media, voicemail recordings, or even a short phone call to the person they seek to impersonate, an attacker can gather enough data to create a convincing copy of a person’s voice or face. A prevalent scam often begins with a distressing phone call or voicemail, seemingly from a trusted loved one, requesting sensitive information or financial assistance. The familiarity of the caller’s voice can trigger an instinctive, immediate response, leaving individuals vulnerable to exploitation. If in doubt, especially if the call comes from an unknown number, always hang up and call the individual directly.
Perhaps the most concerning fact about these types of attacks is how accessible the tools are. For voice cloning in particular, there are multiple examples of free services and software that can be used in these scams. These tools are often very user friendly, requiring only that the user upload audio files of the voice being copied and type out what it should say. The same applies to deepfaked videos: the software is publicly available and is only growing more accessible and effective over time. It is easy to imagine this type of attack as the realm of science fiction, but not only is it real, it is easy for a person with little technical experience to execute.
A quick Google search reveals multiple examples of free, open-source software for voice cloning.
The defense against these styles of attack is to establish a code word: a pre-agreed word that can be used to verify a person’s identity. Code words should be established within groups, such as your family, close friends, and coworkers, the people scammers are most likely to impersonate. Choose words at random and keep them private within the group. It is tempting to simply ask the person on the other end personal questions only the two of you could know, but unless a question was agreed on beforehand, it is difficult to know whether the other party has shared that information elsewhere. Deepfake attacks can be sophisticated and powerful, but with a cool head and prepared precautions they can be easily defended against.
AI Data Privacy
We’ve discussed how attackers take advantage of AI tools, but let’s discuss how we should be using them. Generative AI can be a very effective tool, one that can greatly improve productivity, but using it carelessly poses significant risks. One of the greatest risks is exposing sensitive personal or institutional information: not only do we have to trust the host of an AI tool with the information we give it, but because many generative AI services use prompts to train their models, information we provide can be exposed to other users. When using generative AI, be sure to follow these guidelines:
- Think before you share: Treat AI chatbots like a public forum. If you wouldn’t be comfortable sharing the information on social media, don’t share it with a chatbot.
- Use generic queries: Instead of including confidential information, keep your prompts general and nonspecific.
- Review the University Guidelines: The University of Chicago provides guidance on the use of AI tools, and requires security reviews if confidential information is to be used with them. Before using Generative AI tools, look through the guidelines at this link: https://genai.uchicago.edu/about/generative-ai-guidance
We hope this information has been helpful, and that you head into tax season (and AI Fools season) more aware of the tactics cybercriminals use.
If you have any topics you would like us to write about in our newsletter, please feel free to drop us a line and let us know by e-mailing security@bsd.uchicago.edu.