AWS Bedrock Security Risks Exposed as Researchers Identify Eight Key Attack Vectors

Amazon Web Services' Bedrock, the managed platform for building AI-driven applications, is drawing sharper scrutiny from cybersecurity researchers. Analysts at XM Cyber have identified eight attack vectors in Bedrock deployments that could expose corporate infrastructure. The service is designed to smooth the connection between AI models and company software, but that same fluid access widens the attack surface: the convenience that helps operations can also invite intrusion.

Notably, the eight vectors target not the models themselves but their access settings, configuration choices, and linked tools. The risk is shifting toward gaps in the surrounding infrastructure rather than flaws in the core algorithms.

What makes the risk stand out is not just the technology but how directly Bedrock links to systems such as Salesforce, AWS Lambda, and Microsoft SharePoint. Through these pathways, AI agents pull in confidential information while performing actions across business environments, placing automated components at the heart of company workflows.

One significant class of threat centers on tampering with logs. Attackers who gain access to storage platforms such as Amazon S3 can harvest confidential prompts, reroute invocation records to outside destinations for covert exfiltration, or erase the logs entirely to wipe evidence of the intrusion.

Knowledge bases create another serious exposure. Bedrock's retrieval-augmented generation (RAG) pulls information from sources such as cloud storage, internal databases, and SaaS tools. Attackers who obtain access to those systems, or the credentials tied to them, can bypass the AI layer completely, read unfiltered company data directly, and move laterally across linked environments.

AI agents, though designed to assist, can themselves become entry points for compromise. When an agent is granted broad access, an attacker might alter its directives, attach malicious action groups, or slip corrupted scripts into backend systems. Such changes let them perform illicit operations, from editing records to generating fake profiles, while the activity still looks like routine automation.

A related risk involves modifying workflows. If Bedrock Flows are altered, information may be routed through harmful components instead of secure paths. Similarly, tampering with guardrails, the filters meant to block unsafe content, leaves systems far more susceptible to deceptive inputs and misuse.

Prompt management is another weak spot. Because prompt templates are shared between applications, malicious instructions inserted into a template can reshape how models behave broadly, without any new deployment, which keeps the tampering hidden longer.

What worries security teams most is how small openings turn into large breaches. Even minimal access can be enough for an intruder to escalate privileges, and a single over-permissioned identity can become a pathway inward. Rather than mounting broad attacks, adversaries exploit these narrow footholds deeply, exfiltrating sensitive data and quietly taking control of AI systems. Cloud deployments face these risks just as on-premises networks do.
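To make the log-tampering vector above concrete, here is a minimal defensive sketch, assuming boto3 with standard AWS credentials, that checks where Bedrock invocation logs are delivered and whether the destination bucket blocks public access. The warning logic is illustrative only:

```python
import boto3
from botocore.exceptions import ClientError

bedrock = boto3.client("bedrock")
s3 = boto3.client("s3")

# Where are model invocation logs (prompts and completions) being delivered?
cfg = bedrock.get_model_invocation_logging_configuration().get("loggingConfig", {})
if not cfg:
    print("WARNING: model invocation logging is not enabled")
s3_cfg = cfg.get("s3Config")
if s3_cfg:
    bucket = s3_cfg["bucketName"]
    print(f"Invocation logs delivered to s3://{bucket}")
    # A log bucket that permits public access invites the silent
    # exfiltration and evidence-wiping described above.
    try:
        pab = s3.get_public_access_block(Bucket=bucket)
        locked = all(pab["PublicAccessBlockConfiguration"].values())
    except ClientError:
        locked = False  # no public-access-block configured at all
    if not locked:
        print(f"WARNING: {bucket} does not fully block public access")
```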
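The knowledge-base vector is largely an inventory problem: you cannot lock down data sources you have not enumerated. A minimal sketch, again assuming boto3, that lists every knowledge base and its attached data sources for review (pagination omitted for brevity):

```python
import boto3

agents = boto3.client("bedrock-agent")

# Enumerate every knowledge base and its data sources so that an unexpected
# bucket or connector, a possible path around the model, stands out.
for kb in agents.list_knowledge_bases()["knowledgeBaseSummaries"]:
    sources = agents.list_data_sources(knowledgeBaseId=kb["knowledgeBaseId"])
    for ds in sources["dataSourceSummaries"]:
        print(f"{kb['name']}: data source {ds['name']} ({ds['dataSourceId']})")
```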
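For the agent-hijacking vector, one practical control is auditing which Lambda functions each agent's action groups can execute. In the sketch below, KNOWN_EXECUTORS is a hypothetical allowlist you would maintain yourself, and the ARN shown is a placeholder:

```python
import boto3

agents = boto3.client("bedrock-agent")

# Hypothetical allowlist of Lambda functions your agents are expected to call.
KNOWN_EXECUTORS = {
    "arn:aws:lambda:us-east-1:111122223333:function:approved-handler",
}

for agent in agents.list_agents()["agentSummaries"]:
    groups = agents.list_agent_action_groups(
        agentId=agent["agentId"], agentVersion="DRAFT"
    )["actionGroupSummaries"]
    for group in groups:
        detail = agents.get_agent_action_group(
            agentId=agent["agentId"],
            agentVersion="DRAFT",
            actionGroupId=group["actionGroupId"],
        )["agentActionGroup"]
        # Flag any action group whose Lambda executor is not on the allowlist.
        executor = detail.get("actionGroupExecutor", {}).get("lambda")
        if executor and executor not in KNOWN_EXECUTORS:
            print(f"Unexpected executor on {agent['agentName']}: {executor}")
```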
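Guardrail tampering is hard to spot precisely because an edited filter keeps working, just more permissively. One low-effort signal is surfacing recent modifications for human review; the seven-day window below is an arbitrary example threshold:

```python
import boto3
from datetime import datetime, timedelta, timezone

bedrock = boto3.client("bedrock")

# Flag guardrails modified in the last week so a human can confirm the
# change was intentional; seven days is just an example review window.
cutoff = datetime.now(timezone.utc) - timedelta(days=7)
for g in bedrock.list_guardrails()["guardrails"]:
    if g["updatedAt"] > cutoff:
        print(f"Guardrail {g['name']} (v{g['version']}) changed {g['updatedAt']}")
```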
Taken together, the researchers' recommendations come down to visibility into AI activity and tight access controls around it. Because machine learning tools now live inside core business software, defenses must increasingly target system architecture rather than algorithm accuracy.
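What those tight access rules might look like in practice: a deliberately narrow IAM policy that lets a role invoke one approved model and nothing else in Bedrock, sketched here in Python. The account-free foundation-model ARN format is standard; the specific model ID is just an example:

```python
import json

# Illustrative least-privilege policy: this role may invoke a single
# approved foundation model and cannot touch agents, flows, guardrails,
# logging configuration, or knowledge bases.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "bedrock:InvokeModel",
            "Resource": (
                "arn:aws:bedrock:us-east-1::foundation-model/"
                "anthropic.claude-3-haiku-20240307-v1:0"  # example model ID
            ),
        }
    ],
}
print(json.dumps(policy, indent=2))
```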

AWS Bedrock Security Risks Exposed as Researchers Identify Eight Key Attack Vectors #AttackVectors #AWS #AWSsecurityreport

AI-Powered Cybercrime Hits 600+ FortiGate Firewalls Across 55 Countries, AWS Warns

Cybercriminals using readily available generative AI tools managed to breach more than 600 internet-facing FortiGate firewalls across 55 countries within a little over a month, according to a recent incident analysis released by Amazon Web Services (AWS). The operation, active between mid-January and mid-February, did not rely on sophisticated zero-day vulnerabilities. Instead, attackers automated large-scale attempts to access exposed systems by rapidly testing weak or reused credentials: essentially the digital equivalent of trying every unlocked door, but at high speed with the assistance of AI.

AWS investigators believe the operation was carried out by a financially motivated Russian-speaking group. The attackers scanned for publicly accessible FortiGate management interfaces, attempted to log in using commonly reused passwords, and, once successful, extracted configuration files that provided detailed insight into the victims' network environments.

According to AWS's security team, the threat actors leveraged multiple commercially available AI tools to produce attack playbooks, scripts, and operational documentation. This allowed a relatively small or less technically advanced group to conduct a campaign that would typically require greater manpower and development effort. Analysts also discovered traces of AI-generated code and planning materials on compromised systems, indicating that AI tools were used extensively throughout the operation rather than just for occasional scripting tasks.

"The volume and variety of custom tooling would typically indicate a well-resourced development team," said CJ Moses, CISO at Amazon. "Instead, a single actor or very small group generated this entire toolkit through AI-assisted development."

After gaining access to the firewalls, the attackers retrieved configuration data containing administrator and VPN credentials, network architecture information, and firewall policies. Armed with these details, they attempted deeper intrusions by targeting directory services such as Active Directory, harvesting credentials, and exploring options for lateral movement across compromised networks. Backup infrastructure, including servers running Veeam, was also targeted during the intrusions.

AWS researchers noted that although the tools used in the campaign were functional, they appeared somewhat crude. The scripts showed basic parsing methods and repetitive comments often associated with machine-generated drafts. Despite their imperfections, the tools proved effective enough for large-scale automated attacks. When systems proved difficult to compromise, the attackers often abandoned them and shifted focus to easier targets, suggesting that their strategy prioritized volume over precision.

The affected organizations were spread across several regions, including Europe, Asia, Africa, and Latin America. The activity did not appear to focus on a single sector or country, indicating opportunistic targeting. However, investigators observed clusters of incidents suggesting that some breaches may have provided access to managed service providers or shared infrastructure, potentially increasing the scale of downstream exposure.

AWS emphasized that many of the compromises could have been avoided with standard cybersecurity practices.
Preventing management interfaces from being publicly accessible, implementing multi-factor authentication, and avoiding password reuse would have significantly reduced the attackers' chances of success.

The report comes shortly after Google cautioned that cybercriminal groups are increasingly integrating generative AI technologies, including tools such as Gemini AI, into their operations. These technologies are being used for tasks such as reconnaissance, target profiling, phishing campaign creation, and malware development.
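As a practical follow-up to the first of those recommendations, the sketch below checks whether common FortiGate web-admin ports answer from the internet. The hosts are placeholder TEST-NET addresses; substitute only infrastructure you own and are authorized to scan, and note that admin ports vary by configuration:

```python
import socket

# Hosts you administer (placeholders) and ports commonly used for the
# FortiGate web admin interface. Scan only infrastructure you own.
HOSTS = ["203.0.113.10", "203.0.113.11"]
ADMIN_PORTS = [443, 8443, 10443]

for host in HOSTS:
    for port in ADMIN_PORTS:
        try:
            # A successful TCP connect means the port answers publicly.
            with socket.create_connection((host, port), timeout=3):
                print(f"OPEN: {host}:{port} -- admin interface may be exposed")
        except OSError:
            pass  # closed or filtered: good
```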

AI-Powered Cybercrime Hits 600+ FortiGate Firewalls Across 55 Countries, AWS Warns #AIpoweredhacking #AWSsecurityreport #CyberSecurity
