Advertisement · 728 × 90
#
Hashtag
#ChatbotSecurity
Advertisement · 728 × 90
Preview
Sears Chatbot Exposed Customer Conversations and Contact Details to Public Web Sears' AI chatbot exposed customer conversations and contact details publicly, creating phishing and fraud risks. Here's what customers need to know.

Sears Chatbot Exposed Customer Conversations and Contact Details to Public Web

#DataBreach #SearsNews #ChatbotSecurity #AusNews

thedailyperspective.org/article/2026-03-17-sears...

0 0 0 0
Preview
Eurostar’s AI Chatbot Exposed to Security Flaws, Experts Warn of Growing Cyber Risks   Eurostar’s newly launched AI-driven customer support chatbot has come under scrutiny after cybersecurity specialists identified several vulnerabilities that could have exposed the system to serious risks.  Security researchers from Pen Test Partners found that the chatbot only validated the latest message in a conversation, leaving earlier messages open to manipulation. By altering these older messages, attackers could potentially insert malicious prompts designed to extract system details or, in certain scenarios, attempt to access sensitive information. At the time the flaws were uncovered, the risks were limited because Eurostar had not integrated its customer data systems with the chatbot. As a result, there was no immediate threat of customer data being leaked. The researchers also highlighted additional security gaps, including weak verification of conversation and message IDs, as well as an HTML injection vulnerability that could allow JavaScript to run directly within the chat interface.  Pen Test Partners stated they were likely the first to identify these issues, clarifying: “No attempt was made to access other users’ conversations or personal data”. They cautioned, however, that “the same design weaknesses could become far more serious as chatbot functionality expands”. Eurostar reiterated that customer information remained secure, telling City AM: “The chatbot did not have access to other systems and more importantly no sensitive customer data was at risk. All data is protected by a customer login.” The incident highlights a broader challenge facing organizations worldwide. As companies rapidly adopt AI-powered tools, expanding cloud-based systems can unintentionally increase attack surfaces, making robust security measures more critical than ever.

Eurostar’s AI Chatbot Exposed to Security Flaws, Experts Warn of Growing Cyber Risks #AIcustomersupportsecurity #chatbotsecurity #EurostarAIchatbot

0 0 0 0
Preview
Meta.ai Privacy Lapse Exposes User Chats in Public Feed  Meta’s new AI-driven chatbot platform, Meta.ai, launched recently with much fanfare, offering features like text and voice chats, image generation, and video restyling. Designed to rival platforms like ChatGPT, the app also includes a Discover feed, a space intended to showcase public content generated by users. However, what Meta failed to communicate effectively was that many users were unintentionally sharing their private conversations in this feed—sometimes with extremely sensitive content attached.  In May, journalists flagged the issue when they discovered public chats revealing deeply personal user concerns—ranging from financial issues and health anxieties to legal troubles. These weren’t obscure posts either; they appeared in a publicly accessible area of the app, often containing identifying information. Conversations included users seeking help with medical diagnoses, children talking about personal experiences, and even incarcerated individuals discussing legal strategies—none of whom appeared to realize their data was visible to others.  Despite some recent tweaks to the app’s sharing settings, disturbing content still appears on the Discover feed. Users unknowingly uploaded images and video clips, sometimes including faces, alongside alarming or bizarre prompts. One especially troubling instance featured a photo of a child at school, accompanied by a prompt instructing the AI to “make him cry.” Such posts reflect not only poor design choices but also raise ethical questions about the purpose and moderation of the Discover feed itself.  The issue evokes memories of other infamous data exposure incidents, such as AOL’s release of anonymized user searches in 2006, which provided unsettling insight into private thoughts and behaviors. While social media platforms are inherently public, users generally view AI chat interactions as private, akin to using a search engine. Meta.ai blurred that boundary—perhaps unintentionally, but with serious consequences. Many users turned to Meta.ai seeking support, companionship, or simple productivity help. Some asked for help with job listings or obituary writing, while others vented emotional distress or sought comfort during panic attacks.  In some cases, users left chats expressing gratitude—believing the bot had helped. But a growing number of conversations end in frustration or embarrassment when users realize the bot cannot deliver on its promises or that their content was shared publicly. These incidents highlight a disconnect between how users engage with AI tools and how companies design them. Meta’s ambition to merge AI capabilities with social interaction seems to have ignored the emotional and psychological expectations users bring to private-sounding features.  For those using Meta.ai as a digital confidant, the lack of clarity around privacy settings has turned an experiment in convenience into a public misstep. As AI systems become more integrated into daily life, companies must rethink how they handle user data—especially when users assume privacy. Meta.ai’s rocky launch serves as a cautionary tale about transparency, trust, and design in the age of generative AI.

Meta.ai Privacy Lapse Exposes User Chats in Public Feed #AIChatbots #AItechnology #chatbotsecurity

0 0 0 0

Task 1 finished #adventofcyber #adventofcyber2023 #tryhackme #chatbot #chatbotsecurity #chatgpt

2 0 0 0