48% of Security Professionals Believe AI Is Risky


A recent survey of 500 security professionals by HackerOne, a security research platform, found that 48% believe AI poses the most significant security risk to their organization. Their biggest AI-related concerns include:

  • Leaked training data (35%).
  • Unauthorized usage (33%).
  • The hacking of AI models by outsiders (32%).

These fears highlight the urgent need for companies to rethink their AI security strategies before vulnerabilities become real threats.

AI tends to generate false positives for security teams

While the full Hacker Powered Security Report won’t be available until later this fall, additional research from a HackerOne-sponsored SANS Institute report revealed that 58% of security professionals believe security teams and threat actors could find themselves in an “arms race” to leverage generative AI tactics and techniques in their work.

Security professionals in the SANS survey said they have found success using AI to automate tedious tasks (71%). However, the same participants acknowledged that threat actors could exploit AI to make their own operations more efficient. In particular, respondents “were most concerned with AI-powered phishing campaigns (79%) and automated vulnerability exploitation (74%).”

SEE: Security leaders are growing frustrated with AI-generated code.

“Security teams must find the best applications for AI to keep up with adversaries while also considering its existing limitations — or risk creating more work for themselves,” Matt Bromiley, an analyst at the SANS Institute, said in a press release.

The solution? AI implementations should undergo an external review. More than two-thirds of those surveyed (68%) chose “external review” as the most effective way to identify AI safety and security issues.

“Teams are now more realistic about AI’s current limitations” than they were last year, said HackerOne Senior Solutions Architect Dane Sherrets in an email to roosho. “Humans bring a lot of important context to both defensive and offensive security that AI can’t replicate quite yet. Problems like hallucinations have also made teams hesitant to deploy the technology in critical systems. However, AI is still great for increasing productivity and performing tasks that don’t require deep context.”

Further findings from the SANS 2024 AI Survey, released this month, include:

  • 38% plan to adopt AI within their security strategy in the future.
  • 38.6% of respondents said they have faced shortcomings when using AI to detect or respond to cyber threats.
  • 40% cite legal and ethical implications as a challenge to AI adoption.
  • 41.8% of companies have faced pushback from employees who do not trust AI decisions, which SANS speculates is “due to lack of transparency.”
  • 43% of organizations currently use AI within their security strategy.
  • AI technology within security operations is most often used in anomaly detection systems (56.9%), malware detection (50.5%), and automated incident response (48.9%).
  • 58% of respondents said AI systems struggle to detect new threats or respond to outlier indicators, which SANS attributes to a lack of training data.
  • Of those who reported shortcomings when using AI to detect or respond to cyber threats, 71% said AI generated false positives.

Anthropic seeks input from security researchers on AI safety measures

Generative AI maker Anthropic expanded its bug bounty program on HackerOne in August.

Specifically, Anthropic wants the hacker community to stress-test “the mitigations we use to prevent misuse of our models,” including attempts to break through the guardrails intended to prevent AI from providing recipes for explosives or cyberattacks. Anthropic says it will award up to $15,000 to those who successfully identify novel jailbreaking attacks, and it will give HackerOne security researchers early access to its next safety mitigation system.

roosho, Senior Engineer (Technical Services)
I am Rakib Raihan RooSho, Jack of all IT Trades. You got it right. Good for nothing. I try a lot of things and fail more than that. That's how I learn. Whenever I succeed, I note that in my cookbook. Eventually, that became my blog.