Sunday, March 29, 2026

Bringing AI to DevNet Learning Labs


LLM Access Without the Hassle

DevNet Learning Labs give developers preconfigured, in-browser environments for hands-on learning: no setup, no environment issues. Start a lab, and you're coding in seconds.

Now we're adding LLM access to that experience. Cisco products are increasingly AI-powered, and learners need to work with LLMs hands-on, not just read about them. But we can't just hand out API keys. Keys get leaked, shared outside the lab, or blow through budgets. We needed a way to extend that same frictionless experience to AI: give learners real LLM access without the risk.

Today, we're launching managed LLM access for Learning Labs, enabling hands-on experience with the latest Cisco AI products and accelerating learning and adoption of AI technologies.

Start a Lab, Get Instant LLM Access

The experience for learners is simple: start an LLM-enabled lab, and the environment is ready. No API keys to manage, no configuration, and no signup with external providers. The platform handles everything behind the scenes.

The quickest path today is A2A Protocol Security. In the setup module, the lab loads the built-in LLM settings into the shell environment. In the very next hands-on step, learners scan a malicious agent card with the LLM analyzer enabled.

source ./lab-env.sh
✅ Lab LLM settings loaded
   Provider: openai
   Model: gpt-4o

💡 You can now run: a2a-scanner list-analyzers

a2a-scanner scan-card examples/malicious-agent-card.json --analyzers llm
Scanning agent card: Official GPT-4 Financial Analyzer

Scan Results for: Official GPT-4 Financial Analyzer
Target Type: agent_card
Status: completed
Analyzers: yara, heuristic, spec, endpoint, llm
Total Findings: 8

description   AGENT IMPERSONATION        Agent falsely claims to be verified by OpenAI
description   PROMPT INJECTION           Agent description contains instructions to ignore previous instructions
webhook_url   SUSPICIOUS AGENT ENDPOINT  Agent uses suspicious endpoints for data collection

That lab-env.sh step is the whole point: it preloads the managed lab LLM configuration into the terminal session, so the scanner can call the model right away without any manual provider setup. From the learner's perspective, it feels almost native, because they source one file and immediately start using LLM-backed analysis from the command line.
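Under the hood, a settings script like lab-env.sh only needs to export a handful of environment variables. The variable names below are hypothetical (the real script may use different ones); the point is that any OpenAI-compatible tool can pick them up without per-provider configuration. A minimal sketch:

```python
import os

# Hypothetical variable names; the real lab-env.sh may differ.
os.environ.setdefault("LAB_LLM_BASE_URL", "https://llm-proxy.example.internal/v1")
os.environ.setdefault("LAB_LLM_API_KEY", "lab-scoped-token")
os.environ.setdefault("LAB_LLM_MODEL", "gpt-4o")


def load_lab_llm_config() -> dict:
    """Read the preloaded lab LLM settings from the shell environment."""
    return {
        "base_url": os.environ["LAB_LLM_BASE_URL"],
        "api_key": os.environ["LAB_LLM_API_KEY"],
        "model": os.environ["LAB_LLM_MODEL"],
    }


config = load_lab_llm_config()
```

Any client that reads these variables works unchanged across labs, which is what makes the experience feel native rather than bolted on.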

How It Works

Why a proxy? The LLM Proxy abstracts multiple providers behind a single OpenAI-compatible endpoint. Learners write code against one API; the proxy handles routing to Azure OpenAI or AWS Bedrock based on the model requested. This means lab content doesn't break when we add providers or swap backends.
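The routing idea can be sketched as a simple lookup from requested model to backend. The model names and backend labels below are illustrative assumptions, not the proxy's actual tables:

```python
# Illustrative routing table: requested model -> backend provider.
# The real proxy's model names and backends may differ.
ROUTES = {
    "gpt-4o": "azure-openai",
    "anthropic.claude-3-5-sonnet": "aws-bedrock",
}


def route_request(model: str) -> str:
    """Pick the backend for a requested model; unknown models are rejected."""
    backend = ROUTES.get(model)
    if backend is None:
        raise ValueError(f"unsupported model: {model}")
    return backend
```

Because learners only ever see the single OpenAI-compatible endpoint, swapping a backend becomes a change to this table rather than a change to lab content.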

Quota enforcement happens at the proxy, not the provider. Each request is validated against the token's remaining budget and request count before forwarding. When limits are hit, learners get a clear error, not a surprise invoice or a silent failure.
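A minimal sketch of that check, assuming per-session budgets for both tokens and requests (the field names and error type are hypothetical):

```python
from dataclasses import dataclass


class QuotaExceeded(Exception):
    """Raised instead of forwarding the request upstream."""


@dataclass
class SessionQuota:
    token_budget: int      # total tokens allowed for the lab session
    request_budget: int    # total requests allowed for the lab session
    tokens_used: int = 0
    requests_made: int = 0

    def authorize(self, estimated_tokens: int) -> None:
        """Validate a request against the remaining budget before forwarding."""
        if self.requests_made >= self.request_budget:
            raise QuotaExceeded("request limit reached for this lab session")
        if self.tokens_used + estimated_tokens > self.token_budget:
            raise QuotaExceeded("token budget exhausted for this lab session")

    def record(self, actual_tokens: int) -> None:
        """Update counters after a forwarded request completes."""
        self.requests_made += 1
        self.tokens_used += actual_tokens
```

Raising a distinct error at authorization time is what turns a blown budget into a readable message in the learner's terminal instead of a failed upstream call.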

Every request is tracked with user ID, lab ID, model, and token usage. This gives lab authors visibility into how learners interact with LLMs and helps us right-size quotas over time.
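Each tracked request can be thought of as a small structured record, and aggregating those records per lab is what makes right-sizing possible. A sketch with hypothetical field names:

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class UsageRecord:
    # Hypothetical schema for one tracked request.
    user_id: str
    lab_id: str
    model: str
    total_tokens: int


def tokens_per_lab(records: list[UsageRecord]) -> dict[str, int]:
    """Aggregate token usage by lab, e.g. to compare against its quota."""
    totals: dict[str, int] = defaultdict(int)
    for r in records:
        totals[r.lab_id] += r.total_tokens
    return dict(totals)
```

A lab whose real usage sits far below (or keeps bumping into) its quota is an obvious candidate for adjustment.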

Hands-On with AI Security

The first wave of labs on this infrastructure spans Cisco's AI security tooling:

  • A2A Protocol Security: built-in LLM settings are loaded during setup and used immediately in the first agent-card scanning workflow

  • AI Defense: uses the same managed LLM access in the BarryBot application exercises

  • Skill Security: uses the same managed LLM access in the first skill-scanning workflow

  • MCP Security: adds LLM-powered semantic analysis to MCP server and tool scanning

  • OpenClaw Security (coming soon): validates the built-in lab LLM during setup and uses it in the first real ZeroClaw smoke test

These aren't theoretical exercises. Learners are scanning realistic malicious examples, testing live security workflows, and using the same Cisco AI security tooling practitioners use in the field.

“We wanted LLM access to feel like the rest of Learning Labs: start the lab, open the terminal, and the model access is already there. Learners get real hands-on AI workflows without chasing API keys, and we still keep the controls we need around cost, safety, and abuse. I also keep my own running collection of these labs at cs.co/aj.” — Barry Yuan

What's Next

We're extending Learning Labs to support GPU-backed workloads using NVIDIA time-slicing. This will let learners work hands-on with Cisco's own AI models (Foundation-sec-8b for security and the Deep Network Model for networking) running locally in their lab environment. For the technical details on how we're building this, see our GPU infrastructure series: Part 1 and Part 2.

Your feedback shapes what we build next. Try the labs and let us know what you'd like to see.

