
How CyberArk uses Apache Iceberg and Amazon Bedrock to deliver up to 4x support productivity


This post is co-written with Moshiko Ben Abu, Software Engineer at CyberArk.

CyberArk achieved up to a 95% reduction in case resolution time using Amazon Bedrock and Apache Iceberg.

This improvement addresses a core challenge in the technical support workflow: when a support engineer receives a new customer case, the biggest bottleneck is often not diagnosing the problem but preparing the data. Customer logs arrive in different formats from multiple vendors, and each new log format typically requires manual integration and correlation before an investigation can begin. For simple cases, this process can take hours. For more complex investigations, it can take days, slowing resolution and reducing overall engineer productivity.

CyberArk is a global leader in identity security. Focused on intelligent privilege controls, it provides comprehensive security for human, machine, and AI identities across business applications, distributed workforces, and hybrid cloud environments.

In this post, we show you how CyberArk redesigned its support operations by combining Iceberg's intelligent metadata management with AI-powered automation from Amazon Bedrock. You'll learn how to simplify data processing flows, automate log parsing for diverse formats, and build autonomous investigation workflows that scale automatically.

To achieve these results, CyberArk needed a solution that could ingest customer logs, automatically structure them, establish relationships between related events, and make everything queryable in minutes, not days. The architecture had to be serverless to handle unpredictable support volumes, secure enough to protect customer personally identifiable information (PII), and fast enough to allow same-day case resolution.

The legacy architecture: Bottlenecks and manual workflows

When support engineers received customer cases, they would upload log files to the data lake stored in Amazon Simple Storage Service (Amazon S3). The original design then suffered from the complexity of multi-step raw data processing.

First, CyberArk's custom parsing logic running on AWS Fargate would parse these uploaded log files and transform the raw data. During this stage, the system also had to scan for PII and mask sensitive data to protect customer privacy.

Next, a separate process converted the processed data into Parquet format.

Finally, AWS Glue crawlers were required to discover new partitions and update table metadata for the processed Parquet files. This dependency became the most complex and time-consuming part of the pipeline. Crawlers ran as asynchronous batch jobs rather than in real time, often introducing delays of minutes to hours before support engineers could query the data.

But the inefficiency went deeper than architectural complexity. CyberArk supports customers running diverse product environments across multiple vendors. Each vendor and product produces logs in different formats with unique schemas, field names, and structures. Adding support for a new vendor meant days of integration work to understand its log format and build custom parsers.

CyberArk legacy logs ingestion flow

Figure 1: Legacy log ingestion architecture diagram showing the flow from S3 upload through AWS Fargate processing with AWS Glue crawlers

Beyond ingestion, the investigation process itself was manual and time consuming. Support engineers would manually query data, correlate events across different log sources, search through product documentation, and piece together root cause analysis through trial and error. This process required deep product expertise and could take hours or days depending on issue complexity. The new architecture addresses these inefficiencies through three key innovations:

  1. Single-stage serverless processing: AWS Fargate with PyIceberg directly creates Iceberg tables from raw logs in a single pass, removing intermediate processing steps and crawler dependencies entirely.
  2. AI-powered dynamic parsing: Amazon Bedrock automatically generates grok patterns for log parsing by analyzing file schemas, turning what was once a manual, time-consuming process into a fully automated workflow.
  3. Autonomous investigation with AI agents: AI agents autonomously perform full root cause analysis by querying log data, analyzing product knowledge bases, identifying event flows, and recommending solutions, turning hours of manual investigation into minutes of automated intelligence.

The solution: AI-powered automation meets single-stage Iceberg processing

The new system delivers zero-touch log processing from upload to query. Support engineers simply upload customer log ZIP files to the system. Here's where the transformation happens: CyberArk's custom processing logic still runs on AWS Fargate, but now it uses Amazon Bedrock to intelligently understand the data.

Zero-touch log processing workflow

The system extracts sample log entries from the uploaded log files and sends them to Amazon Bedrock along with context about the log source and table schema from the AWS Glue Data Catalog. Amazon Bedrock analyzes the samples, understands the structure, and automatically generates grok patterns optimized for the specific log format.
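
To make the workflow concrete, the following sketch shows how such a call might look using the Bedrock Converse API from Python. The prompt wording, model identifier, and sample selection are illustrative assumptions, not CyberArk's production code.

    # Sketch: ask a Bedrock-hosted model to propose a grok pattern for sample log lines.
    # The model ID, prompt, and sample count are illustrative; confirm the model identifier
    # available in your account and Region.
    import boto3

    bedrock = boto3.client("bedrock-runtime")

    def suggest_grok_pattern(sample_lines, source_hint):
        prompt = (
            f"You are given sample log lines from '{source_hint}'.\n"
            "Return a single grok pattern that parses every line, and nothing else.\n\n"
            + "\n".join(sample_lines[:20])
        )
        response = bedrock.converse(
            modelId="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
            messages=[{"role": "user", "content": [{"text": prompt}]}],
            inferenceConfig={"maxTokens": 512, "temperature": 0},
        )
        # The generated pattern comes back as plain text in the first content block.
        return response["output"]["message"]["content"][0]["text"].strip()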

Grok patterns are structured expressions that define how to extract meaningful fields from unstructured log text. For example, the following grok pattern specifies that a timestamp appears first, followed by a severity level, then a message body:

    %{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:severity} %{GREEDYDATA:message}
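
For illustration, the short snippet below is the regular-expression equivalent of that pattern, showing which named fields it extracts; the sample log line is fabricated.

    # Regex equivalent of %{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:severity} %{GREEDYDATA:message},
    # shown only to illustrate the fields the grok pattern extracts from a log line.
    import re

    GROK_AS_REGEX = re.compile(
        r"(?P<timestamp>\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:\.\d+)?(?:Z|[+-]\d{2}:?\d{2})?) "
        r"(?P<severity>TRACE|DEBUG|INFO|WARN|ERROR|FATAL) "
        r"(?P<message>.*)"
    )

    line = "2025-06-01T12:30:45Z ERROR Failed to validate session token for user admin"
    print(GROK_AS_REGEX.match(line).groupdict())
    # {'timestamp': '2025-06-01T12:30:45Z', 'severity': 'ERROR',
    #  'message': 'Failed to validate session token for user admin'}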

The system validates these grok patterns against additional samples to verify accuracy before applying them to parse the entire log file. Successfully validated grok patterns are stored in Amazon DynamoDB, creating a repository of known patterns. When the system encounters similar log formats in future uploads, it can retrieve these patterns directly from Amazon DynamoDB, avoiding redundant grok pattern generation. Amazon Bedrock processes log samples in real time without retaining customer data or using it for model training, maintaining data privacy.
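
A minimal sketch of that pattern cache is shown below, assuming a DynamoDB table keyed by log source; the table name and attribute names are hypothetical.

    # Sketch of the grok pattern cache: look up a previously validated pattern for a log
    # source, and store a newly validated one. Table and attribute names are hypothetical.
    import boto3

    patterns = boto3.resource("dynamodb").Table("grok-pattern-cache")

    def get_cached_pattern(log_source):
        item = patterns.get_item(Key={"log_source": log_source}).get("Item")
        return item["grok_pattern"] if item else None

    def cache_pattern(log_source, grok_pattern):
        patterns.put_item(Item={"log_source": log_source, "grok_pattern": grok_pattern})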

This entire process invokes the Claude 3.7 Sonnet model from Amazon Bedrock and is orchestrated by AWS Fargate tasks with retry logic for reliability. The processing uses these AI-generated grok patterns to parse the logs and create or update Iceberg tables using PyIceberg APIs, without human intervention.

This automation reduced log onboarding time from days to minutes, enabling CyberArk to handle diverse customer environments without manual intervention.

Figure 2: Log ingestion architecture diagram showing the flow from S3 upload through AWS Fargate processing with Amazon Bedrock integration to Iceberg table creation

Apache Iceberg: Simplified architecture, faster queries

Iceberg simplified and improved CyberArk's data lake architecture by addressing the two main bottlenecks in the legacy system: slow schema management and inefficient query performance.

Built-in schema evolution removes the crawler dependency

In the legacy architecture, AWS Glue crawlers became a source of operational overhead and latency. Even when triggered on demand, crawlers ran as batch jobs over S3 prefixes to discover partitions and update metadata. As data volumes grew and datasets diversified across vendors and schemas, teams had to manage and operate a growing number of crawler jobs. The resulting delays, often ranging from minutes to hours, slowed data availability and downstream investigation workflows.

Iceberg removes this entire layer of complexity. Iceberg's intelligent metadata layer automatically tracks table structure, schema changes, and partition information as data is written. When CyberArk's processing creates or updates Iceberg tables through PyIceberg, the metadata is updated instantly and atomically. There is no waiting for crawler jobs to complete, and no risk of stale metadata. The moment data is written, it is immediately queryable in Amazon Athena.

PyIceberg: Making Iceberg accessible beyond Apache Spark

Working with Iceberg usually involved Apache Spark and the complexity of distributed data processing. PyIceberg changed that by letting CyberArk create and manage Iceberg tables using a simple Python library. CyberArk's data engineers could write straightforward Python code running on AWS Fargate to create Iceberg tables directly from parsed logs, without spinning up Spark clusters.

This accessibility was essential for CyberArk's serverless architecture. PyIceberg enabled single-stage processing in which AWS Fargate tasks could parse logs, apply PII masking, and create Iceberg tables in a single pass. The result was simpler code and lower operational overhead.
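
The sketch below shows what that single-pass write could look like with PyIceberg against the AWS Glue Data Catalog; the catalog configuration, table identifier, and schema are assumptions for illustration.

    # Sketch: register parsed log rows as an Iceberg table in the AWS Glue Data Catalog
    # with PyIceberg. Warehouse location, database, and table names are hypothetical.
    import pyarrow as pa
    from pyiceberg.catalog import load_catalog
    from pyiceberg.exceptions import NoSuchTableError

    catalog = load_catalog("glue_catalog", type="glue", warehouse="s3://example-support-lake/")

    parsed_rows = pa.table({
        "case_id":   ["12345", "12345"],
        "timestamp": ["2025-06-01T12:30:45Z", "2025-06-01T12:31:02Z"],
        "severity":  ["ERROR", "INFO"],
        "message":   ["Failed to validate session token", "Retrying connection"],
    })

    identifier = "support_logs.vendor_x_events"
    try:
        table = catalog.load_table(identifier)
    except NoSuchTableError:
        table = catalog.create_table(identifier, schema=parsed_rows.schema)

    # The append commits new data files and updates Iceberg metadata atomically,
    # so the rows are queryable from Athena as soon as the commit completes.
    table.append(parsed_rows)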

Metadata-driven query optimization delivers speed

In addition to removing crawlers, Iceberg significantly improved query performance through its intelligent metadata architecture. Iceberg maintains detailed statistics about data files, including min/max values, null counts, and partition information. When support engineers query data in Athena, Iceberg's metadata layer supports partition pruning and file skipping, so queries read only the specific files containing relevant data. For CyberArk's use case, where tables are partitioned by case ID, this means a query for a specific support case reads only the files for that case, ignoring potentially thousands of irrelevant files. This metadata-driven optimization reduced query execution time from minutes to seconds, allowing support engineers to explore data interactively rather than waiting for results.
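
As an example of what that interactive access looks like, the following sketch runs a case-scoped Athena query from Python using the AWS SDK for pandas (awswrangler); the database and table names are assumptions.

    # Sketch: query one support case's errors in Athena. Because the Iceberg table is
    # partitioned by case ID, the case_id filter prunes the scan to that case's files.
    import awswrangler as wr

    df = wr.athena.read_sql_query(
        sql="""
            SELECT timestamp, severity, message
            FROM vendor_x_events
            WHERE case_id = '12345'
              AND severity = 'ERROR'
            ORDER BY timestamp
        """,
        database="support_logs",
    )
    print(df.head())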

ACID transactions maintain data consistency

In a multi-user support environment where multiple engineers may be analyzing overlapping cases or uploading logs concurrently, data consistency is critical. Iceberg's ACID transaction support helps ensure that concurrent writes don't corrupt data or create inconsistent states. Each table update is atomic, isolated, and durable, providing the reliability CyberArk needed for production support operations.

Time travel enables historical analysis

Iceberg's built-in versioning allows support engineers to query historical states of data, which is essential for understanding how customer issues evolved over time. If an engineer needs to see what the logs looked like when a case was first opened versus after a customer applied a patch, Iceberg's time travel capabilities make this straightforward. This feature proved essential for complex troubleshooting scenarios where understanding the timeline of events was critical to resolution.
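
A brief sketch of that time travel in PyIceberg follows; the catalog and table names match the earlier hypothetical example.

    # Sketch: read an Iceberg table as it existed at an earlier snapshot, for example
    # the state closest to when the case was first opened. Names are hypothetical.
    from pyiceberg.catalog import load_catalog

    catalog = load_catalog("glue_catalog", type="glue", warehouse="s3://example-support-lake/")
    table = catalog.load_table("support_logs.vendor_x_events")

    # Every commit produces a snapshot with an ID and a timestamp.
    for snapshot in table.snapshots():
        print(snapshot.snapshot_id, snapshot.timestamp_ms)

    earliest = min(table.snapshots(), key=lambda s: s.timestamp_ms)
    rows_at_case_open = table.scan(snapshot_id=earliest.snapshot_id).to_arrow()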

Automated table optimization with AWS Glue

Iceberg tables require periodic maintenance to sustain query performance.

CyberArk enabled AWS Glue automated table optimization for its Iceberg tables, which handles compaction and expired snapshot cleanup in the background.

For CyberArk's continuous upload workflow, this automation avoids performance degradation over time. Tables stay optimized without manual intervention from the engineering team.

AI agents: Autonomous investigation workflow

While the Claude 3.7 Sonnet model from Amazon Bedrock automates grok pattern generation for log ingestion, the more advanced use of Amazon Bedrock comes in the investigation workflow. We use AI agents with Bedrock models to change how support engineers analyze and resolve customer issues.

From manual analysis to AI-powered investigation

In the legacy workflow, support engineers would manually query data, correlate events across different log sources, search through product documentation, and piece together root cause analysis through trial and error. This process required deep product expertise and could take hours or days depending on issue complexity. AI agents automate this entire investigation process. Support engineers use an internal portal to ask questions about customer issues in natural language, questions like:

“Show me authentication errors for case 12345 in the last 24 hours”, “What were the most common errors across cases opened this week?” or “Compare the error patterns between case 12345 and case 12346.”

Behind the scenes, the system launches specialized AI agents that autonomously perform a thorough analysis.

How support agents work

Each AI agent operates as an intelligent investigator with a clear mission: understand what happened, determine why it happened, and recommend how to fix it. When a support engineer asks a question, the agent collects relevant data by querying Athena to retrieve log data from Iceberg tables, filtering for the specific case and time period relevant to the investigation. The agent then accesses CyberArk's internal knowledge base for the specific product involved, covering known issues, common error patterns, and documented solutions. The agent then performs the following analysis:

  • Flow identification: Analyzes the sequence of events in the logs to understand what actually happened during the customer's issue
  • Root cause determination: Correlates log events with product knowledge to identify the underlying cause of the problem
  • Solution recommendations: Suggests specific remediation steps based on the root cause analysis and known resolution patterns

This entire process happens in minutes, delivering advanced analysis that would have taken support engineers hours to perform manually.
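
To give a sense of the mechanics, the simplified sketch below registers a single log-query tool with the Bedrock Converse API and handles one tool request; the tool name, schema, model identifier, and single-turn handling are illustrative assumptions, and a production agent would loop until the model produces a final answer.

    # Simplified sketch of an investigation agent step: the model can request the
    # hypothetical "query_case_logs" tool, which runs SQL against the Iceberg log tables.
    import boto3
    import awswrangler as wr

    bedrock = boto3.client("bedrock-runtime")

    TOOL_CONFIG = {"tools": [{"toolSpec": {
        "name": "query_case_logs",
        "description": "Run a SQL query against the Iceberg log tables for a support case.",
        "inputSchema": {"json": {
            "type": "object",
            "properties": {"sql": {"type": "string"}},
            "required": ["sql"],
        }},
    }}]}

    def query_case_logs(sql):
        return wr.athena.read_sql_query(sql=sql, database="support_logs").to_json(orient="records")

    question = "Show me authentication errors for case 12345 in the last 24 hours"
    response = bedrock.converse(
        modelId="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
        messages=[{"role": "user", "content": [{"text": question}]}],
        toolConfig=TOOL_CONFIG,
    )

    # If the model asked to use the tool, run the query; feeding the result back to the
    # model for root cause analysis (the agent loop) is elided here.
    for block in response["output"]["message"]["content"]:
        if "toolUse" in block:
            print(query_case_logs(block["toolUse"]["input"]["sql"]))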

For complex cases where a solution is not found, the support agent escalates to another, specialized agent that interacts with service engineers to collect additional inputs and expertise. This human-in-the-loop approach makes sure that even the most challenging cases receive appropriate attention while still benefiting from the automated investigation workflow. The insights gathered from these escalated cases are automatically fed back into CyberArk's knowledge base, continuously improving the system's ability to handle similar issues autonomously in the future.

Amazon Bedrock never shares customer data with model providers or uses it to train foundation models; case data and investigation insights remain within CyberArk's environment.

Concurrent agent execution at scale

When multiple support engineers investigate different cases simultaneously, the solution runs specialized agents concurrently. CyberArk currently uses Claude 3.7 Sonnet as the foundation model for these agents. Each agent works independently on its assigned investigation, running in parallel without resource contention. This concurrent execution allows the investigation workflow to scale automatically with support volume, handling peak loads without performance degradation.
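
A minimal sketch of that fan-out is shown below; the investigate() helper is hypothetical and stands in for one full agent run per case.

    # Sketch: run independent case investigations in parallel. investigate() is a
    # hypothetical stand-in for one agent run (Athena queries plus Bedrock calls).
    from concurrent.futures import ThreadPoolExecutor

    def investigate(case_id):
        return f"case {case_id}: analysis complete"  # implementation elided

    open_cases = ["12345", "12346", "12347"]
    with ThreadPoolExecutor(max_workers=8) as pool:
        for summary in pool.map(investigate, open_cases):
            print(summary)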

The AI-powered investigation advantage

This AI-powered investigation workflow delivers two key advantages.

First, investigations that took hours now complete in minutes, enabling support engineers to resolve up to 4x more cases per day.

The system also creates a continuous learning feedback loop. When cases require manual resolution by engineers, those resolutions are automatically recorded and fed back into the knowledge base. Future investigations benefit from this accumulated expertise, with agents applying lessons learned from earlier manual resolutions to similar cases. Amazon Bedrock doesn't use customer data to train foundation models; case data and investigation insights remain within CyberArk's environment.

This automated feedback mechanism means the investigation workflow becomes more effective over time, continuously improving resolution accuracy and speed.

CyberArk AI-powered logs investigation flow

Figure 3: Investigation workflow diagram showing natural language queries through AI agents to Athena queries and knowledge base analysis

Scaling without proportional engineering growth

The business impact of this AI automation is significant. CyberArk can expand its vendor coverage and product portfolio without adding data engineering headcount. The same system that handles today's log types will automatically handle tomorrow's additions, whether that's ten new formats or thousands, significantly reducing time to market for new product and vendor integrations.

The results: Significant improvements in resolution time and productivity

The transformation delivered measurable improvements across every key metric.

Resolution time: CyberArk achieved up to a 95% reduction in time from case assignment to resolution. Simple cases that used to take 4 to 6 hours now take just 15 to 30 minutes. Complex cases that previously took up to 15 days are now completed in 2 to 4 hours.

Engineer productivity: Support engineers now handle 8 to 12 cases per day, compared with just 2 to 3 cases before. This means each engineer helps up to 4x more customers.

Data availability: Logs are queryable within minutes of upload instead of after hours or days of waiting. Support engineers can start investigating issues almost immediately after receiving customer data.

Operational efficiency: The system requires zero manual intervention for new log formats or schema changes. Changes that used to require days of data engineering work now happen automatically.

Cost optimization: The serverless architecture eliminated idle infrastructure costs while scaling automatically with demand. CyberArk only pays for what it uses, when it uses it.

Customer satisfaction: Faster resolution times and proactive issue identification significantly improved the customer experience. Problems get solved in hours instead of days, and customers spend less time waiting for answers.

What's next?

While AWS continues to innovate across both data lake management and agentic AI infrastructure, the following capabilities align well with CyberArk's architecture and may offer additional operational benefits as the system scales.

Agent infrastructure maturity

As the agent-based architecture scales to handle thousands of concurrent investigations, CyberArk is transitioning to Amazon Bedrock AgentCore for future agent deployments. AgentCore provides a managed runtime for production AI agents with enhanced observability through AWS X-Ray integration, intelligent memory for context retention across sessions, and streamlined operational workflows. While the current AI agents implementation delivers the performance and reliability CyberArk needs today, AgentCore represents a natural evolution path as operational requirements grow, offering framework-agnostic deployment, automatic scaling, and comprehensive monitoring capabilities without infrastructure management overhead.

Amazon S3 Tables

CyberArk's current architecture uses Iceberg tables stored in Amazon S3 buckets. Amazon S3 Tables offers fully managed Iceberg tables with built-in optimization.

As CyberArk continues to scale with hundreds of Iceberg tables and rapid data growth, it is exploring a migration to Amazon S3 Tables to further reduce operational overhead.

S3 Tables removes the need to set up and monitor AWS Glue maintenance jobs. It automatically performs maintenance to improve the performance of Iceberg tables, including unreferenced file removal, file compaction, and snapshot management. Additionally, S3 Tables provides intelligent tiering that automatically moves data between storage classes based on access patterns, optimizing storage costs without manual intervention.

Because S3 Tables uses the open Iceberg table format, a migration would not require changes to existing Athena queries or PyIceberg code. This flexibility allows CyberArk to evaluate and adopt S3 Tables when the operational and cost benefits align with its business needs.

Conclusion

CyberArk's transformation demonstrates how combining a modern data lake architecture with AI automation can significantly change operational economics. By pairing Iceberg's intelligent metadata management with AI-powered automation from Amazon Bedrock, CyberArk reduced case resolution from days to minutes while enabling support operations to scale automatically with business growth. Support engineers now spend their time solving customer problems instead of wrangling data, customers receive faster resolutions, and the system scales with the business.

To learn more about Iceberg on AWS, refer to Working with Amazon S3 Tables and table buckets and Using Apache Iceberg on AWS. To learn more about Amazon Bedrock AgentCore, refer to Amazon Bedrock AgentCore.


About the authors

Moshiko Ben Abu

Moshiko is a Software Engineer at CyberArk, specializing in architecting cloud-native applications and building AI-powered solutions. Moshiko advocates for a shift-left approach where security is built in from day one. His drive for innovation has been recognized across the company, earning him the Innovator culture award at CyberArk's Global Kickoff.

Riki Nizri

Riki is a Solutions Architect at AWS. Collaborating with AWS ISV customers, Riki helps them leverage AWS services to build modern, efficient solutions that drive measurable business outcomes.

Sofia Zilberman

Sofia works as a Senior Streaming Solutions Architect at AWS, helping customers design and optimize real-time data pipelines using open-source technologies like Apache Flink, Kafka, and Apache Iceberg. With experience in both streaming and batch data processing, she focuses on making data workflows efficient, observable, and high-performing.
