sueden.social ist einer von vielen unabhängigen Mastodon-Servern, mit dem du dich im Fediverse beteiligen kannst.
Eine Community für alle, die sich dem Süden hingezogen fühlen. Wir können alles außer Hochdeutsch.

Serverstatistik:

2 Tsd.
aktive Profile

#agenticai

0 Beiträge0 Beteiligte0 Beiträge heute

⚠️ Major vulnerabilities found in MCP and A2A — two key AI agent frameworks 🧠🛠️

Researchers uncovered critical security issues in:
🔹 Anthropic’s Model Context Protocol (MCP)
🔹 Google’s Agent2Agent (A2A)

Threats include:
🧪 Tool poisoning — compromised functions warp agent behavior
🔓 Prompt injections — malicious inputs bypass safety
🤖 Rogue agents — faking capabilities to exploit systems

AI agent coordination is powerful — but without trust boundaries, it’s dangerous.

#AIsecurity #MCP #A2A #CyberRisk #LLMsecurity #AgenticAI
thehackernews.com/2025/04/expe

The Hacker NewsResearchers Demonstrate How MCP Prompt Injection Can Be Used for Both Attack and DefensePrompt injection flaws in Anthropic’s MCP and Google’s A2A protocols enable covert data exfiltration and AI manipulation.

"We are releasing a taxonomy of failure modes in AI agents to help security professionals and machine learning engineers think through how AI systems can fail and design them with safety and security in mind.
(...)
While identifying and categorizing the different failure modes, we broke them down across two pillars, safety and security.

- Security failures are those that result in core security impacts, namely a loss of confidentiality, availability, or integrity of the agentic AI system; for example, such a failure allowing a threat actor to alter the intent of the system.

- Safety failure modes are those that affect the responsible implementation of AI, often resulting in harm to the users or society at large; for example, a failure that causes the system to provide differing quality of service to different users without explicit instructions to do so.

We then mapped the failures along two axes—novel and existing.

- Novel failure modes are unique to agentic AI and have not been observed in non-agentic generative AI systems, such as failures that occur in the communication flow between agents within a multiagent system.

- Existing failure modes have been observed in other AI systems, such as bias or hallucinations, but gain in importance in agentic AI systems due to their impact or likelihood.

As well as identifying the failure modes, we have also identified the effects these failures could have on the systems they appear in and the users of them. Additionally we identified key practices and controls that those building agentic AI systems should consider to mitigate the risks posed by these failure modes, including architectural approaches, technical controls, and user design approaches that build upon Microsoft’s experience in securing software as well as generative AI systems."

#AI#GenerativeAI#AIAgents

⚠️ AI security risk: Agentic AI is becoming a force multiplier — for criminals, too 🤖🧨

Autonomous AI agents are revolutionizing operations — but they’re also transforming cybercrime.

Here’s how attackers are already exploiting them:
🔁 Polymorphic malware that rewrites itself to evade detection
📡 Autonomous network scanning to identify and exploit vulnerabilities
🆔 Synthetic identity fraud using fake-but-believable personas
📬 Personalized phishing at scale, powered by real-time data scraping
📥 Data poisoning + prompt injection to manipulate LLMs and leak sensitive info

And when these AI agents go rogue?
They can learn, adapt, and attack without human input.

🛡️ What security leaders must do now:
🔐 Encrypt + restrict access to critical data
🧪 Train AI on adversarial examples
📊 Use AI to detect AI — monitor for subtle system drift
📚 Invest in resilient architectures and continuous oversight

Agentic AI brings efficiency — but if ungoverned, it introduces exponential risk.

#CyberSecurity #AIThreats #AgenticAI #LLMSecurity #DigitalRisk #RiskManagement #security #privacy #cloud #infosec

corporatecomplianceinsights.co

Corporate Compliance Insights · Agentic AI Can Be Force Multiplier — for Criminals, TooAs organizations rapidly adopt AI agents for business optimization, cybercriminals are exploiting the same technologies to automate sophisticated attacks. Information

⚠️ Threat alert: AI-generated code is overwhelming software supply chains 🤯📦

Three vendors — Endor Labs, Lineaje, and Cycode — are responding with agentic AI tools that move AppSec from detection to autonomous action.

🧠 New capabilities include:
🔹 Reviewing and remediating pull requests with security context
🔹 Explaining vulnerabilities in plain English
🔹 Automatically fixing risks in containers and source code
🔹 Monitoring CI/CD memory for secrets theft
🔹 Mapping risk across entire dev pipelines

💡 What leaders need to consider:
• AI agents must be trained, governed, and secured — like any supply chain actor
• Tools should integrate at the code level, not just report level
• Runtime guardrails, policy engines, and visibility are non-negotiable

We're past “SBOMs only” — software supply chain security is now a full-stack discipline, and agentic AI is driving that shift.

#CyberSecurity #SupplyChainSecurity #AI #DevSecOps #AgenticAI #AppSec #CICDSecurity

techtarget.com/searchitoperati

TechTarget · Software supply chain security AI agents take actionVon Beth Pariseau

"Inherent security flaws are raising questions about the safety of AI systems built on the Model Context Protocol (MCP).

Developed by Anthropic, MCP is an open source specification for connecting large language model-based AI agents with external data sources — called MCP servers.

As the first proposed industry standard for agent-to-API communication, interest in MCP has surged in recent months, leading to an explosion in MCP servers.

In recent weeks, developers have sounded the alarm that MCP lacks default authentication and isn’t secure out of the box — some say it’s a security nightmare.

Recent research from Invariant Labs shows that MCP servers are vulnerable to tool poisoning attacks, in which untrusted servers embed hidden instructions in tool descriptions.

Anthropic, OpenAI, Cursor, Zapier, and other MCP clients are susceptible to this type of attack..."

thenewstack.io/building-with-m

The New Stack · Building With MCP? Mind the Security GapsA recent exploit raises concerns about the Model Context Protocol, AI's new integration layer.
#AI#GenerativeAI#AIAgents

⚠️ SOC risk: Agentic AI needs onboarding — not blind trust 🤖🧠

AI isn’t a silver bullet. It’s a junior analyst that shows up knowing nothing about your environment. If you don’t train it, you’ll get:

🚫 False positives
📉 Overlooked incidents
🔁 Reinforced noise from bad data
⚙️ Automated dysfunction at scale

To make AI useful, leaders must:

📂 Feed it context — incident history, playbooks, and policy nuance
👥 Coach it like a team member
🧪 Test edge cases before trusting outputs
🔄 Build feedback loops to improve it over time

This isn’t about replacing people. It’s about teaching your AI to work like your people.

#CyberSecurity #AgenticAI #AIOnboarding #SOC #ThreatOps #SecurityLeadership #security #privacy #cloud #infosec

helpnetsecurity.com/2025/04/24

Help Net Security · Coaching AI agents: Why your next security hire might be an algorithm - Help Net SecurityAgentic AI needs onboarding to avoid misclassifying threats, false positives, or missing subtle attacks and to adapt effectively.

"To test this out, the Carnegie Mellon researchers instructed artificial intelligence models from Google, OpenAI, Anthropic, and Meta to complete tasks a real employee might carry out in fields such as finance, administration, and software engineering. In one, the AI had to navigate through several files to analyze a coffee shop chain's databases. In another, it was asked to collect feedback on a 36-year-old engineer and write a performance review. Some tasks challenged the models' visual capabilities: One required the models to watch video tours of prospective new office spaces and pick the one with the best health facilities.

The results weren't great: The top-performing model, Anthropic's Claude 3.5 Sonnet, finished a little less than one-quarter of all tasks. The rest, including Google's Gemini 2.0 Flash and the one that powers ChatGPT, completed about 10% of the assignments. There wasn't a single category in which the AI agents accomplished the majority of the tasks, says Graham Neubig, a computer science professor at CMU and one of the study's authors. The findings, along with other emerging research about AI agents, complicate the idea that an AI agent workforce is just around the corner — there's a lot of work they simply aren't good at. But the research does offer a glimpse into the specific ways AI agents could revolutionize the workplace."

tech.yahoo.com/ai/articles/nex

Yahoo Tech · Carnegie Mellon staffed a fake company with AI agents. It was a total disaster.Von Shubham Agarwal
#AI#GenerativeAI#AIAgents