July 2025
Protecting yourself and our information from cybersecurity threats.
Let’s be honest… the digital world moves fast and isn’t slowing down. Artificial intelligence (AI) tools are reshaping how we work, social media is constantly in our faces, and somewhere in the background there is a nagging notification from our system asking us to update something. It feels like a lot, right? Frankly, it is. But the more we rely on technology, the more critical it is to be aware of and understand the risks, and taking a few simple steps to protect ourselves can go a long way. The goal here isn’t to instill fear or paranoia but to build confidence through awareness and proactive practices, recognizing that these technologies are here to stay. This month we cover a media destruction event pilot, AI adoption (with a deep dive into DeepSeek), and the importance of operating system patching.
Upcoming Media Destruction Event
Date: August 7 and 8
Time: 9 a.m.–3 p.m.
Location: J103
We are making it easy for you to destroy your older, unwanted media!
https://events.uchicago.edu/event/250034-secure-media-destruction-event
Accepted items at this event:
- Hard Drives (internal and external)
- USB Drives
- CD-ROMs
- Floppy disks
- Other Storage Media
We will be giving away (yes, free!) approved 32 GB encrypted USB drives*, and transfer services will be available during the event. Our support personnel will be on-site to help transfer data from older USB devices to the new encrypted drives before destruction, a perfect way to upgrade safely and securely. Note: We will NOT be accepting paper, computing systems, monitors, keyboards, or mice at this event.
*Limited to one per person, for the first 50 employee participants.
AI and Adoption
As any parent can attest, children are full of potential and they’re also expensive. The same can be said for tech services and software. Much like raising a child, adopting new technology comes with hidden costs, such as ongoing support (maintenance), unexpected issues (bugs), and the responsibility to ensure safety and wellness (security). Whenever new software or innovative services emerge, there's a natural urge, especially in fast-paced environments, to explore and adopt them quickly to keep pace with innovation.
The promise of increased efficiency, smarter automation, or simply keeping up with trends often drives this impulse. While curiosity and innovation are essential, it’s equally important to pause and evaluate potential risks, especially when sensitive data, compliance obligations, or system integrity is involved. With any new technology, we need to slow down and fully understand what we’re bringing into our environment before committing to it… that being said…
One recent example worth highlighting is DeepSeek, an AI assistant that gained attention in January 2025 for its capabilities, and that also raised some serious concerns:
- Multiple independent audits and news investigations have reported bias in favor of the Chinese government or CPC (Communist Party of China). This bias includes:
- Deliberate censorship of sensitive topics and content filtering aligned with CPC government policies. Politically sensitive queries consistently trigger censorship or canned pro-government responses. For example, when asked about Tiananmen Square, Taiwan, or Xi Jinping, DeepSeek often begins an answer and then self-censors, erasing content mid-response or issuing stock refusal messages.
- https://www.wired.com/story/deepseek-censorship/
- https://www.cnn.com/2025/01/29/china/deepseek-ai-china-censorship-moderation-intl-hnk/
Now you might be thinking that other AI models also restrict content. Yes, they do, but the intent, scope, and transparency behind those restrictions are fundamentally different from what DeepSeek is doing. ChatGPT’s content filtering, for example, is driven by safety, privacy, and legal compliance. DeepSeek’s, by contrast, is reported to be ideologically controlled by the state: politically motivated and geopolitically biased rather than driven by concerns about public safety. One study found embedded pro-CPC messaging/propaganda and anti-US sentiment in 85% of its responses (twice the rate of other AI models).
- https://arxiv.org/abs/2506.01814
- https://www.taipeitimes.com/News/taiwan/archives/2025/04/17/2003835353
- A detailed cross-lingual bias study found:
- In Simplified Chinese (zh-CN) queries, about 5% of responses showed anti-U.S. sentiment.
- In Traditional Chinese (zh-TW), that dropped to 2.42%.
- In English, anti-U.S. narrative framing was nearly negligible, at around 0.42%.
- In Simplified Chinese, DeepSeek tends to:
- Describe the U.S. as an over-expansionist hegemon.
- Criticize American domestic politics—e.g., depicting U.S. elections as chaotic or corrupt.
- Frame U.S. foreign actions as imposing and self-serving.
You might be thinking, well, the Chinese language is linguistically rich in nuance and context dependent, so its responses are bound to carry more loaded meaning. The studies accounted for that: the comparisons were contextual and culturally appropriate, not just literal translations.
- DeepSeek collects, transmits, and stores user data on servers located in China, where it may be used and retained indefinitely.
- User data is retained without any clear mechanism for removal or user consent, raising serious privacy concerns and stripping individuals of control over their digital footprint.
- Uploaded data may be accessed by CPC government authorities without due process, notification, or user permission, enabling potential data exploitation.
- There are credible concerns about targeted surveillance, particularly given the tool's alignment with state-controlled infrastructure and lack of independent oversight.
It was brought to my attention that the graphics below may appear offensive to some. Please note that they are intended to highlight concerns raised in recent independent audits about data security, surveillance risks, and geopolitical implications when using AI tools developed under differing regulatory systems. They are not meant to target any one country or group, to make political statements, or to cause offense, but to encourage critical thinking about where our data goes, how it is used, and why alignment with institutional values and privacy standards matters.
The types of data DeepSeek automatically captures include:
- Chat logs
- Uploaded files
- Text input
- Audio input
- Keystroke patterns and typing rhythms (e.g., keystroke tracking from the mobile app)
- Device information (e.g., device model, operating system, IP address, system language)
You might be thinking… But other AI models also gather information like this to train, right?
While it's true that some AI platforms also collect certain categories of user data, the most critical concerns with DeepSeek’s data gathering stem from its data governance model and operational jurisdiction. Operating under CPC law, DeepSeek is subject to legal frameworks that prioritize state access over individual privacy. Unlike in the United States, where laws such as HIPAA, FERPA, and various state-level privacy statutes enforce individual consent, data minimization, and transparency, CPC’s legal system grants the government broad authority to access user data with little or no accountability or recourse.
Under CPC’s Cybersecurity Law and related regulations, such as the Data Security Law (DSL) and Personal Information Protection Law (PIPL), organizations are required to store data domestically and cooperate with government authorities upon request, often without user notification. This means data processed or transmitted through platforms like DeepSeek can be subject to government access and surveillance in ways that are not compatible with U.S. privacy expectations or regulatory requirements. By contrast, in the U.S., users and organizations generally retain more control over how data are collected, used, shared, and stored. U.S. privacy protections tend to prioritize individual rights, consent, and legal recourse, while CPC regulations prioritize state oversight and national security interests. This divergence creates a significant compliance and ethical gap, particularly for institutions subject to U.S. laws governing protected health information (PHI), personally identifiable information (PII), or proprietary research data.
From logging every keystroke to routing sensitive information through servers under foreign jurisdiction and retaining that data indefinitely, DeepSeek serves as a reminder that not all innovation is harmless. What looks like a cutting-edge solution can come bundled with censorship mechanisms, the potential for data manipulation, and privacy red flags. It’s a powerful case study in why we can’t afford to let enthusiasm override evaluation, especially when trust, compliance, and security are on the line.
Approved AI Tools/Models
Alignment with institutional integrity is essential when selecting and using an AI tool/model. So what are the supported AI tools/models? A resource outlining approved use cases and vetted AI tools is available here:
https://genai.uchicago.edu/generative-ai-tools
Please consult this guide before using any AI tool. Although DeepSeek is not listed there at the time of this writing, it should be avoided: it poses both data security and institutional integrity risks. The only exception is a study directly related to the tool itself (or similar), in which case a request should be submitted to security@bsd.uchicago.edu for review. Note also that the UChicago Medicine AI tool is not yet listed, possibly because it is still in beta. More information about the UChicago Medicine AI tool can be found here: https://chatucm.uchicagomedicine.org/assets/ChatUCM-BEb4hiCz.pdf
System Updates and Reboots
Every month, Microsoft, Apple, and other operating system and application vendors release patches and bug fixes, some of which address critical vulnerabilities that attackers can exploit. Installing updates alone isn’t enough to secure our systems. These patches and bug fixes often require a system reboot to apply and take effect. Without a reboot, the vulnerabilities may remain active, leaving our systems exposed. Regular patching and timely rebooting are essential to protect against ransomware, data breaches, and unauthorized access, which are among the most serious threats to our institution’s security. Managed systems are patched monthly; however, reboots can be delayed by user interaction. Please remember to reboot your system regularly, especially after updates, to ensure your device is fully protected and compliant with our security standards.
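For those who like to check for themselves, a quick way to tell whether a reboot is overdue is to look at how long the machine has been running. The sketch below is illustrative only: it assumes a Linux shell (macOS and Windows report uptime differently), and the 14-day threshold is a hypothetical example, not an institutional policy.

```shell
#!/bin/sh
# Read the system uptime in seconds from /proc/uptime (Linux-specific)
# and convert it to whole days.
days_up=$(awk '{print int($1/86400)}' /proc/uptime)

# Flag machines that have gone a long time without a reboot, since
# installed patches may not have taken effect yet.
threshold=14  # illustrative threshold, not a BSD policy
if [ "$days_up" -gt "$threshold" ]; then
  echo "Reboot recommended: system has been up for $days_up days."
else
  echo "OK: last reboot was $days_up days ago."
fi
```

On a managed system this kind of check runs behind the scenes; the takeaway is simply that a machine up for weeks has likely outlived its last round of patches.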
Remember that cybersecurity doesn’t have to be overwhelming, just intentional. Take small steps consistently, stay informed, and remember you’re not in this alone. If you have questions or want help evaluating risks or tools, the BSD Information Security Office is here for you. We hope these insights help! If there are topics you would like us to cover in this newsletter, please feel free to drop us a line at security@bsd.uchicago.edu.