Saturday, February 28, 2026

The Domains and Organizational Aspects of AI Security


When your CISO mentions "AI security" at the next board meeting, what exactly do they mean? Are they talking about protecting your AI systems from attacks? Using AI to catch hackers? Preventing employees from leaking data to an unapproved AI service? Ensuring your AI doesn't produce harmful outputs?

The answer might be "all of the above," and that's precisely the problem.

AI has become deeply embedded in enterprise operations. As a result, the intersection of "AI" and "security" has become increasingly complex and confusing. The same terms are used to describe fundamentally different domains with distinct objectives, leading to miscommunication that can derail security strategies, misallocate resources, and leave critical gaps in protection. We need a shared understanding and a shared language.

Jason Lish (Cisco's Chief Information Security Officer) and Larry Lidz (Cisco's VP of Software Security) co-authored this paper with me to help address this challenge head-on. Together, we introduce a five-domain taxonomy designed to bring clarity to AI security conversations across enterprise operations.

The Communication Challenge

Consider this scenario: your executive team asks you to present the company's "AI security strategy" at the next board meeting. Without a common framework, each stakeholder may walk into that conversation with a very different interpretation of what is being asked. Is the board asking about:

  • Protecting your AI models from adversarial attacks?
  • Using AI to enhance your threat detection?
  • Preventing data leakage to external AI services?
  • Providing guardrails for AI output safety?
  • Ensuring regulatory compliance for AI systems?
  • Defending against AI-enabled or AI-generated cyber threats?

This ambiguity leads to very real organizational problems, including:

  • Miscommunication in executive and board discussions
  • Misaligned vendor evaluations, comparing apples to oranges
  • Fragmented security strategies with dangerous gaps
  • Resource misallocation focused on the wrong objectives

Without a shared framework, organizations struggle to accurately assess risks, assign accountability, and implement comprehensive, coherent AI security strategies.

The Five Domains of AI Security

We propose a framework that organizes the AI-security landscape into five clear, intentionally distinct domains. Each addresses different concerns, involves different threat actors, requires different controls, and often falls under different organizational ownership. The domains are:

  • Securing AI
  • AI for Security
  • AI Governance
  • AI Safety
  • Responsible AI

Each domain addresses a distinct class of risk and is designed to be used in conjunction with the others to create a comprehensive AI strategy.

These five domains don't exist in isolation; they reinforce and depend on one another and must be intentionally aligned. Learn more about each domain in the paper, which is intended as a starting point for industry discussion, not a prescriptive checklist. Organizations are encouraged to adapt and extend the taxonomy to their specific contexts while preserving the core distinctions between domains.

Framework Alignment

Just as the NIST Cybersecurity Framework provides a common language for discussing the domains of cybersecurity without removing the need for detailed cybersecurity frameworks such as NIST SP 800-53 and ISO 27001, this taxonomy is not meant to work in isolation from more detailed frameworks, but rather to provide a common vocabulary across the industry.

As such, the paper builds on Cisco's Integrated AI Security and Safety Framework, recently released by my colleague Amy Chang. It also aligns with established industry frameworks, such as the Coalition for Secure AI (CoSAI) Risk Map, MITRE ATLAS, and others.

The intersection of AI and security is not a single problem to solve, but a constellation of distinct risk domains, each requiring different expertise, controls, and organizational ownership. By aligning these domains with organizational context, organizations can:

  • Communicate precisely about AI security concerns without ambiguity
  • Assess risk comprehensively across all relevant domains
  • Assign accountability clearly to the right teams
  • Invest strategically rather than reactively
