A recent survey of 500 security professionals by HackerOne, a security research platform, found that 48% believe AI poses the most significant security risk to their organization. Their biggest AI-related concerns include:
- Leaked training data (35%).
- Unauthorized usage (33%).
- The hacking of AI models by outsiders (32%).
These fears highlight the urgent need for companies to reassess their AI security strategies before vulnerabilities become real threats.
AI tends to generate false positives for security teams
While the full Hacker-Powered Security Report won’t be available until later this fall, further research from a HackerOne-sponsored SANS Institute report revealed that 58% of security professionals believe that security teams and threat actors could find themselves in an “arms race” to leverage generative AI tactics and techniques in their work.
Security professionals in the SANS survey said they have found success using AI to automate tedious tasks (71%). However, the same participants acknowledged that threat actors could exploit AI to make their own operations more efficient. In particular, respondents “were most concerned with AI-powered phishing campaigns (79%) and automated vulnerability exploitation (74%).”
SEE: Security leaders are getting frustrated with AI-generated code.
“Security teams must find the best applications for AI to keep up with adversaries while also considering its existing limitations — or risk creating more work for themselves,” Matt Bromiley, an analyst at the SANS Institute, said in a press release.
The solution? AI implementations should undergo an external review. More than two-thirds of those surveyed (68%) chose “external review” as the most effective way to identify AI safety and security issues.
“Teams are now more realistic about AI’s current limitations” than they were last year, said HackerOne Senior Solutions Architect Dane Sherrets in an email to TechRepublic. “Humans bring a lot of important context to both defensive and offensive security that AI can’t replicate quite yet. Issues like hallucinations have also made teams hesitant to deploy the technology in critical systems. However, AI is still great for increasing productivity and performing tasks that don’t require deep context.”
Further findings from the SANS 2024 AI Survey, released this month, include:
- 38% plan to adopt AI within their security strategy in the future.
- 38.6% of respondents said they have faced shortcomings when using AI to detect or respond to cyber threats.
- 40% cite legal and ethical implications as a challenge to AI adoption.
- 41.8% of companies have faced pushback from employees who don’t trust AI decisions, which SANS speculates is “due to the lack of transparency.”
- 43% of organizations currently use AI within their security strategy.
- AI technology within security operations is most often used in anomaly detection systems (56.9%), malware detection (50.5%), and automated incident response (48.9%).
- 58% of respondents said AI systems struggle to detect new threats or respond to outlier indicators, which SANS attributes to a lack of training data.
- Of those who reported shortcomings with using AI to detect or respond to cyber threats, 71% said AI generated false positives.
Anthropic seeks input from security researchers on AI safety measures
Generative AI maker Anthropic expanded its bug bounty program on HackerOne in August.
Specifically, Anthropic wants the hacker community to stress-test “the mitigations we use to prevent misuse of our models,” including trying to break through the guardrails intended to prevent AI from providing recipes for explosives or cyberattacks. Anthropic says it will award up to $15,000 to those who successfully identify new jailbreaking attacks and will provide HackerOne security researchers with early access to its next safety mitigation system.