Meta AI’s new chatbot raises privacy alarms
Meta’s new AI chatbot is getting personal, and it might be sharing more than you realize. A recent app update introduced a “Discover” feed that makes user-submitted chats public, complete with prompts and AI responses. Some of those chats include everything from legal troubles to medical conditions, often with names and profile photos still attached. The result is a privacy nightmare in plain sight.
If you’ve ever typed something sensitive into Meta AI, now is the time to check your settings and find out just how much of your data could be exposed.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my newsletter at CYBERGUY.COM/NEWSLETTER.
What is Meta AI, and what does the “Discover” tab do?
Meta’s AI app, launched in April 2025, is designed to be both a chatbot and a social platform. Users can chat casually or deep dive into personal topics, from relationship questions to financial concerns or health issues.
What sets Meta AI apart from other chatbots is the “Discover” tab, a public feed that displays shared conversations. It was meant to encourage community and creativity, letting users showcase interesting prompts and responses. Unfortunately, many didn’t realize their conversations could be made public with just one tap, and the interface often fails to make the public/private distinction clear.
The feature positions Meta AI as a kind of AI-powered social network, blending search, conversation, and status updates. But what sounds innovative on paper has opened the door to major privacy slip-ups.
Why Meta AI’s Discover tab is a privacy risk
Privacy experts are sounding the alarm over Meta’s Discover tab, calling it a serious breach of user trust. The feed surfaces chats containing legal dilemmas, therapy discussions, and deeply personal confessions, often linked to real accounts. In some cases, names and profile photos are visible. Although Meta says only shared chats appear, the interface makes it easy to hit “share” without realizing it means public exposure. Many assume the button saves the conversation privately. Worse, logging in with a public Instagram account can make shared AI activity publicly accessible by default, increasing the risk of identification.
Some posts reveal sensitive health or legal issues, financial troubles, or relationship conflicts. Others include contact details or even audio clips. A few contain pleas like “keep this private,” written by users who didn’t realize their messages would be broadcast. These aren’t isolated incidents, and as more people use AI for personal support, the stakes will only get higher.

How to change your privacy settings in the Meta AI app
If you’re using Meta AI, check your privacy settings and manage your prompt history so you don’t accidentally share something sensitive. To keep your future prompts private:
On a phone (iPhone or Android):
- Open the Meta AI app on your phone.
- Tap your profile photo.
- Select Data & privacy from the menu.
- Tap Manage your information or a similarly titled option.
- Enable the setting that makes all prompts visible only to you. This prevents any past prompts from being viewed publicly.
On the website (desktop):
- Open your browser and go to meta.ai.
- Sign in with your Facebook or Instagram account, if prompted.
- Click your profile photo or name in the top-right corner.
- Go to Settings, then choose Data & privacy.
- Under Manage your information, adjust your prompt visibility by selecting “Make all prompts visible only to you.”
- To manage individual entries, navigate to your History and click the three-dot icon next to any prompt to either delete it or limit its visibility.
How to review or update the privacy of posted prompts
Fortunately, you can change the visibility of prompts you’ve already posted, delete them entirely, and update your settings to keep future prompts private.
On a phone (iPhone or Android):
- Open the Meta AI app.
- Tap the History icon at the bottom (it typically looks like a clock or a stack of messages).
- Select the prompt you want to update.
- Tap the three dots in the top-right corner.
- Choose “Make visible to only you” or “Delete.”
On the website (desktop):
- Go to meta.ai.
- Click on your prompt in the left sidebar.
- Click the three dots in the upper-right corner.
- Select “Make visible to only you” or “Delete.”
If other users replied to your prompt before you made it private, those replies will remain attached but won’t be visible unless you reshare the prompt. Once reshared, the replies will also become visible again.
How to bulk update or delete your prompts
On both the app and the website:
- Tap or click your profile picture (top right in the app, bottom left on desktop).
- Go to Settings > Data & Privacy > Manage Your Information.
- Tap or click “Make all prompts visible to only you,” then select Apply to all.
- Or choose “Delete all prompts,” then tap or click Delete all.
Note that if you’ve used voice chat with Meta AI, deleting a prompt also deletes the associated voice recording. However, deleted prompts may still appear in your history until you refresh the app or website.
Even casual users should take a moment to review their settings and chat history to make sure personal details aren’t being shared without their knowledge.
Are AI chat platforms really private?
This issue isn’t unique to Meta. Most AI chat tools, including ChatGPT, Claude, and Google Gemini, store your conversations by default and may use them to improve performance, train future models, or develop new features. What many users don’t realize is that their inputs can be reviewed by human moderators, flagged for analysis, or saved in training logs.
Even if a platform says your chats are “private,” that usually just means they aren’t visible to the public. It doesn’t mean your data is encrypted, anonymous, or protected from internal access. In many cases, companies retain the right to use your conversations for product development unless you specifically opt out, and finding that opt-out isn’t always straightforward.
If you’re signed in with a personal account that includes your real name, email address, or social media links, your activity may be easier to connect to your identity than you think. Combine that with questions about health, finances, or relationships, and you’ve essentially created a detailed digital profile without meaning to.
Some platforms now offer temporary chat modes or incognito settings, but these features are usually off by default. Unless you manually enable them, your data is likely being stored and possibly reviewed.
The takeaway: AI chat platforms are not private by default. You need to actively manage your settings, be mindful of what you share, and stay informed about how your data is being handled behind the scenes.

How to protect your privacy when using AI chatbots
AI tools can be incredibly helpful, but without the right precautions, they can also open you up to privacy risks. Whether you’re using Meta AI, ChatGPT, or any other chatbot, here are some smart, proactive ways to protect yourself:
1) Use aliases and avoid personal identifiers: Don’t use your full name, birthday, address, or any details that could identify you. Even first names combined with other context can be risky.
2) Never share sensitive information: Avoid discussing medical diagnoses, legal matters, bank account info, or anything you wouldn’t want on the front page of a search engine.
3) Clear your chat history regularly: If you’ve already shared sensitive info, go back and delete it. Many AI apps let you clear chat history through Settings or your account dashboard.
4) Adjust privacy settings often: App updates can sometimes reset your preferences or introduce new default options. Even small changes to the interface can affect what’s shared and how. It’s a good idea to check your settings every few weeks to make sure your data is still protected.
5) Use an identity theft protection service: Scammers actively look for exposed data, especially after a privacy slip. Identity theft protection services can monitor personal information like your Social Security number (SSN), phone number, and email address, and alert you if it is being sold on the dark web or used to open an account. They can also help you freeze your bank and credit card accounts to prevent further unauthorized use by criminals. Visit Cyberguy.com/IdentityTheft for tips and recommendations.
6) Use a VPN for extra privacy: A reliable VPN hides your IP address and location, making it harder for apps, websites, or bad actors to track your online activity. It also adds protection on public Wi-Fi, shielding your device from hackers who might try to snoop on your connection. For recommendations, see my expert review of the best VPNs for browsing the web privately on your Windows, Mac, Android, and iOS devices at Cyberguy.com/VPN.
7) Don’t link AI apps to your real social accounts: If possible, create a separate email address or dummy account for experimenting with AI tools, and keep your main profiles disconnected. To create a quick email alias that keeps your main accounts protected, visit Cyberguy.com/Mail.
Kurt’s key takeaways
Meta’s decision to turn chatbot prompts into social content has blurred the line between private and public in a way that catches many users off guard. Even if you think your chats are safe, a missed setting or default option can expose more than you intended. Before typing anything sensitive into Meta AI or any chatbot, pause. Check your privacy settings, review your chat history, and think carefully about what you’re sharing. A few quick steps now can save you from bigger privacy headaches later.
With so much sensitive data potentially at risk, do you think Meta is doing enough to protect your privacy, or is it time for stricter guardrails on AI platforms? Let us know by writing to us at Cyberguy.com/Contact.
Copyright 2025 CyberGuy.com. All rights reserved.