
Protecting one of the world’s leading travel, software and services businesses against the accelerating threats of AI illustrates why CISOs need to stay several steps ahead of the latest adversarial AI tradecraft and attack strategies.
American Express Global Business Travel (Amex GBT), a leading global B2B travel platform, and its security team are doing just that, proactively confronting this challenge with a dual focus on cybersecurity innovation and governance. With deep roots in a bank holding company, Amex GBT upholds the highest standards of data privacy, security compliance and risk management. This makes secure, scalable AI adoption a mission-critical priority.
Amex GBT Chief Information Security Officer David Levin is leading this effort. He is building a cross-functional AI governance framework, embedding security into every phase of AI deployment and managing the rise of shadow AI without stifling innovation. His approach offers a blueprint for organizations navigating the high-stakes intersection of AI advancement and cyber defense.
The following are excerpts from Levin’s interview with VentureBeat:
VentureBeat: How is Amex GBT using AI to modernize threat detection and SOC operations?
David Levin: We’re integrating AI across our threat detection and response workflows. On the detection side, we use machine learning (ML) models in our SIEM and EDR tools to spot malicious behavior faster and with fewer false positives. That alone accelerates how we investigate alerts. In the SOC, AI-powered automation enriches alerts with contextual data the moment they appear. Analysts open a ticket and already see critical details; there’s no longer a need to pivot between multiple tools for basic information.
AI also helps prioritize which alerts are likely urgent. Our analysts then spend their time on the highest-risk issues rather than sifting through noise. It’s a massive boost in efficiency. We can respond at machine speed where it makes sense, and let our skilled security engineers focus on complex incidents. Ultimately, AI helps us detect threats more accurately and respond faster.
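Levin did not walk through implementation details, but the enrichment and prioritization pattern he describes can be sketched in a few lines. The data sources, field names and rules below are hypothetical, illustrating only the idea that context and a priority land on the alert before an analyst ever opens it.

```python
# Hypothetical sketch of SOC alert enrichment and prioritization. Data
# sources, field names and rules are illustrative, not Amex GBT's tooling.
from dataclasses import dataclass, field


@dataclass
class Alert:
    alert_id: str
    host: str
    indicator: str
    context: dict = field(default_factory=dict)
    priority: str = "unranked"


def enrich(alert: Alert, asset_db: dict, threat_intel: set) -> Alert:
    """Attach asset criticality and threat-intel matches to the alert itself."""
    alert.context["asset_criticality"] = asset_db.get(alert.host, "unknown")
    alert.context["known_bad_indicator"] = alert.indicator in threat_intel
    return alert


def prioritize(alert: Alert) -> Alert:
    """Simple rule: known-bad indicators on critical assets jump the queue."""
    if alert.context["known_bad_indicator"] and alert.context["asset_criticality"] == "critical":
        alert.priority = "urgent"
    elif alert.context["known_bad_indicator"]:
        alert.priority = "high"
    else:
        alert.priority = "review"
    return alert


asset_db = {"vpn-gw-01": "critical"}
threat_intel = {"203.0.113.7"}  # example indicator feed

alert = prioritize(enrich(Alert("A-1042", "vpn-gw-01", "203.0.113.7"), asset_db, threat_intel))
print(alert.priority)  # -> urgent
```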
VentureBeat: You also work with managed security partners like CrowdStrike OverWatch. How does AI serve as a force multiplier for both in-house and external SOC teams?
Levin: AI amplifies our capabilities in two ways. First, CrowdStrike OverWatch gives us 24/7 threat hunting augmented by advanced machine learning. They constantly scan our environment for subtle signs of an attack, including things we might miss if we relied on manual inspection alone. That means we have a top-tier threat intelligence team on call, using AI to filter out low-risk events and highlight real threats.
Second, AI boosts the efficiency of our internal SOC analysts. We used to manually triage far more alerts. Now, an AI engine handles that initial filtering. It can quickly distinguish suspicious from benign, so analysts only see the events that need human judgment. It feels like adding a smart virtual teammate. Our staff can handle more incidents, focus on threat hunting, and pick up advanced investigations. That synergy of human expertise plus AI support drives better outcomes than either alone.
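A minimal sketch of that initial filtering step, using an off-the-shelf classifier with made-up features and an arbitrary confidence threshold rather than anything from GBT’s stack, might look like this:

```python
# Illustrative ML-assisted triage: a classifier scores events and only those
# above a confidence threshold reach a human analyst. Features, labels and
# the 0.7 threshold are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy training data: [failed_logins, bytes_out_mb, off_hours] -> 1 = malicious
X_train = np.array([[0, 2, 0], [1, 5, 0], [30, 800, 1], [45, 1200, 1], [2, 10, 0], [25, 600, 1]])
y_train = np.array([0, 0, 1, 1, 0, 1])

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

incoming = np.array([[3, 12, 0], [40, 950, 1]])
scores = model.predict_proba(incoming)[:, 1]

for event, score in zip(incoming, scores):
    if score >= 0.7:
        print(f"escalate to analyst: {event.tolist()} (score={score:.2f})")
    else:
        print(f"auto-close / log only: {event.tolist()} (score={score:.2f})")
```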
VentureBeat: You’re heading up an AI governance framework at GBT, based on NIST principles. What does that look like, and how do you implement it cross-functionally?
Levin: We leaned on the NIST AI Risk Management Framework, which helps us systematically assess and mitigate AI-related risks around security, privacy, bias and more. We formed a cross-functional governance committee with representatives from security, legal, privacy, compliance, HR and IT. That team coordinates AI policies and ensures new projects meet our standards before going live.
Our framework covers the entire AI lifecycle. Early on, each use case is mapped against potential risks—like model drift or data exposure—and we define controls to address them. We measure performance through testing and adversarial simulations to ensure the AI isn’t easily fooled. We also insist on at least some level of explainability. If an AI flags an incident, we want to know why. Then, once systems are in production, we monitor them to confirm they still meet our security and compliance requirements. By integrating these steps into our broader risk program, AI becomes part of our overall governance rather than an afterthought.
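One way to picture that lifecycle mapping is as a per-use-case risk register organized around the NIST AI RMF functions. The entry and launch gate below are illustrative examples, not GBT’s actual governance records:

```python
# Illustrative AI risk register entry, loosely following the NIST AI RMF
# functions (Map, Measure, Manage). Fields and values are examples only.
use_case = {
    "name": "SOC alert prioritization model",
    "map": {  # identify context and risks up front
        "risks": ["model drift", "sensitive data exposure", "adversarial evasion"],
        "data_sources": ["EDR telemetry", "SIEM logs"],
    },
    "measure": {  # how the risks are tested before and after launch
        "tests": ["holdout detection-rate benchmark", "adversarial simulation"],
        "explainability_required": True,
    },
    "manage": {  # controls and monitoring once in production
        "controls": ["encrypted log feeds", "role-based access", "quarterly retrain review"],
        "owner": "security engineering",
    },
}


def ready_for_production(entry: dict) -> bool:
    """Simple launch gate: controls are defined and explainability is required."""
    return bool(entry["manage"]["controls"]) and entry["measure"]["explainability_required"]


print(ready_for_production(use_case))  # -> True
```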
VentureBeat: How do you handle shadow AI and ensure employees follow these policies?
Levin: Shadow AI emerged the moment public generative AI tools took off. Our approach starts with clear policies: Employees must not feed confidential or sensitive data into external AI services without approval. We outline acceptable use, potential risks, and the process for vetting new tools.
On the technical side, we block unapproved AI platforms at our network edge and use data loss prevention (DLP) tools to prevent sensitive content from being uploaded. If someone tries using an unauthorized AI site, they get alerted and directed to an approved alternative. We also rely heavily on training. We share real-world cautionary tales—like feeding a proprietary document into a random chatbot. That tends to stick with people. By combining user education, policy clarity and automated checks, we can curb most rogue AI usage while still encouraging legitimate innovation.
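The two technical controls Levin mentions, an allow-list at the network edge and a DLP check on outbound content, could be sketched roughly as follows; the domains and patterns are placeholders, not GBT’s policy:

```python
# Hedged sketch of two shadow-AI controls: an allow-list for AI services at
# the network edge and a DLP-style content check before upload. The domain
# list and patterns are hypothetical examples.
import re

APPROVED_AI_DOMAINS = {"approved-ai.internal.example.com"}

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{15,16}\b"),              # card-number-like strings
    re.compile(r"(?i)confidential|internal only"),
]


def egress_allowed(domain: str) -> bool:
    """Block traffic to AI platforms that are not on the approved list."""
    return domain in APPROVED_AI_DOMAINS


def dlp_flags(text: str) -> list[str]:
    """Return which sensitive patterns appear in content bound for an AI tool."""
    return [p.pattern for p in SENSITIVE_PATTERNS if p.search(text)]


request_domain = "chat.some-public-ai.example"
payload = "CONFIDENTIAL: Q3 supplier pricing sheet"

if not egress_allowed(request_domain):
    print(f"blocked: {request_domain} is not an approved AI service")
elif dlp_flags(payload):
    print(f"blocked by DLP: matched {dlp_flags(payload)}")
else:
    print("request allowed")
```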
VentureBeat: In deploying AI for security, what technical challenges do you encounter, for example, data security, model drift, or adversarial testing?
Levin: Data security is a primary concern. Our AI often needs system logs and user data to spot threats, so we encrypt those feeds and restrict who can access them. We also make sure no personal or sensitive information is used unless it’s strictly necessary.
Model drift is another challenge. Attack patterns evolve constantly. If we rely on a model trained on last year’s data, we risk missing new threats. We have a schedule to retrain models when detection rates drop or false positives spike.
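That kind of retraining trigger can be expressed as a simple threshold check; the baseline figures here are illustrative, not GBT’s actual metrics:

```python
# Illustrative drift check: flag a model for retraining when the detection
# rate drops or the false-positive rate spikes past tolerance. Thresholds
# and metric values are made up for the example.
def needs_retraining(detection_rate: float, false_positive_rate: float,
                     baseline_detection: float = 0.95,
                     baseline_fp: float = 0.02) -> bool:
    """True when performance has degraded enough to schedule a retrain."""
    detection_dropped = detection_rate < baseline_detection - 0.05
    fp_spiked = false_positive_rate > baseline_fp * 2
    return detection_dropped or fp_spiked


print(needs_retraining(detection_rate=0.97, false_positive_rate=0.015))  # False
print(needs_retraining(detection_rate=0.88, false_positive_rate=0.015))  # True
print(needs_retraining(detection_rate=0.96, false_positive_rate=0.06))   # True
```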
We also do adversarial testing, essentially red-teaming the AI to see if attackers could trick or bypass it. That might mean feeding the model synthetic data that masks real intrusions, or trying to manipulate logs. If we find a vulnerability, we retrain the model or add extra checks. We’re also big on explainability: if AI recommends isolating a machine, we want to know which behavior triggered that decision. That transparency fosters trust in the AI’s output and helps analysts validate it.
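A rough sketch of that red-teaming loop, with a toy model and invented features standing in for a real detection pipeline, might look like this:

```python
# Sketch of red-teaming a detection model: take known-malicious samples,
# apply an evasion-style perturbation, and check whether detections hold up.
# The model, features and perturbation are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features: [failed_logins, bytes_out_mb, off_hours]
X = np.array([[1, 5, 0], [2, 8, 0], [35, 900, 1], [50, 1400, 1]])
y = np.array([0, 0, 1, 1])
model = LogisticRegression(max_iter=1000).fit(X, y)

malicious = X[y == 1].astype(float)
# Evasion attempt: throttle exfiltration volume to look more like normal traffic
evaded = malicious.copy()
evaded[:, 1] *= 0.1

baseline_recall = float(model.predict(malicious).mean())
evaded_recall = float(model.predict(evaded).mean())
print(f"detected {baseline_recall:.0%} of raw attacks, {evaded_recall:.0%} after evasion")

if evaded_recall < baseline_recall:
    print("gap found: retrain with evasion-style samples or add compensating checks")
```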
VentureBeat: Is AI changing the role of the CISO, making you more of a strategic business enabler than purely a compliance gatekeeper?
Levin: Absolutely. AI is a prime example of how security leaders can guide innovation rather than block it. Instead of just saying, “No, that’s too risky,” we’re shaping how we adopt AI from the ground up by defining acceptable use, training data standards, and monitoring for abuse. As CISO, I’m working closely with executives and product teams so we can deploy AI solutions that actually benefit the business, whether by improving the customer experience or detecting fraud faster, while still meeting regulations and protecting data.
We also have a seat at the table for big decisions. If a department wants to roll out a new AI chatbot for travel booking, they involve security early to handle risk and compliance. So we’re moving beyond the compliance gatekeeper image, stepping into a role that drives responsible innovation.
VentureBeat: How is AI adoption structured globally across GBT, and how do you embed security into that process?
Levin: We took a global center of excellence approach. There’s a core AI strategy team that sets overarching standards and guidelines, then regional leads drive initiatives tailored to their markets. Because we operate worldwide, we coordinate on best practices: if the Europe team develops a robust process for AI data masking to comply with GDPR, we share that with the U.S. or Asia teams.
Security is embedded from day one through “secure by design.” Any AI project, wherever it’s initiated, faces the same risk assessments and compliance checks before launch. We do threat modeling to see how the AI could fail or be misused. We enforce the same encryption and access controls globally, but also adapt to local privacy rules. This ensures that no matter where an AI system is built, it meets consistent security and trust standards.
VentureBeat: You’ve been piloting tools like CrowdStrike’s Charlotte AI for alert triage. How are AI co-pilots helping with incident response and analyst training?
Levin: With Charlotte AI we are offloading a lot of alert triage. The system instantly analyzes new detections, estimates severity and suggests next steps. That alone saves our tier-1 analysts hours every week. They open a ticket and see a concise summary instead of raw logs.
We can also interact with Charlotte, asking follow-up questions such as, “Is this IP address linked to prior threats?” This “conversational AI” aspect is a major help to junior analysts, who learn from the AI’s reasoning. It’s not a black box; it shares context on why it’s flagging something as malicious. The net result is faster incident response and a built-in mentorship layer for our team. We do maintain human oversight, especially for high-impact actions, but these co-pilots let us respond at machine speed while preserving analyst judgment.
VentureBeat: What do advances in AI mean for cybersecurity vendors and managed security service providers (MSSPs)?
Levin: AI is raising the bar for security solutions. We expect MDR providers to automate more of their front-end triage so human analysts can focus on the toughest problems. If a vendor can’t show meaningful AI-driven detection or real-time response, they’ll struggle to stand out. Many are embedding AI assistants like Charlotte directly into their platforms, accelerating how quickly they spot and contain threats.
That said, AI’s ubiquity also means we need to see past the buzzwords. We test and validate a vendor’s AI claims: “Show us how your model learned from our data,” or “Prove it can handle these advanced threats.” The arms race between attackers and defenders will only intensify, and security vendors that master AI will thrive. I fully expect new services, such as AI-based policy enforcement and deeper forensics, to emerge from this trend.
VentureBeat: Finally, what advice would you give CISOs starting their AI journey, balancing compliance needs with enterprise innovation?
Levin: First, build a governance framework early, with clear policies and risk assessment criteria. AI is too powerful to deploy haphazardly. If you define what responsible AI is in your organization from the outset, you’ll avoid chasing compliance retroactively.
Second, partner with legal and compliance teams upfront. AI can cross boundaries in data privacy, intellectual property, and more. Having them onboard early prevents nasty surprises later.
Third, start small but show ROI. Pick a high-volume security pain point (like alert triage) where AI can shine. That quick win builds credibility and confidence to expand AI efforts. Meanwhile, invest in data hygiene—clean data is everything to AI performance.
Fourth, train your people. Show analysts how AI helps them, rather than replaces them. Explain how it works, where it’s reliable and where human oversight is still required. A well-informed staff is more likely to embrace these tools.
Finally, embrace a continuous-improvement mindset. Threats evolve; so must your AI. Retrain models, run adversarial tests, gather feedback from analysts. The technology is dynamic, and you’ll need to adapt. If you do all this—clear governance, strong partnerships, ongoing measurement—AI can be an enormous enabler for security, letting you move faster and more confidently in a threat landscape that grows by the day.
VentureBeat: Where do you see AI in cybersecurity going over the next few years, both for GBT and the broader industry?
Levin: We’re heading toward autonomous SOC workflows, where AI handles more of the alert triage and initial response. Humans oversee complex incidents, but routine tasks get fully automated. We’ll also see predictive security—AI models that forecast which systems are most at risk, so teams can patch or segment them in advance.
On a broader scale, CISOs will oversee digital trust, ensuring AI is transparent, compliant with emerging laws and not easily manipulated. Vendors will refine AI to handle everything from advanced forensics to policy tuning. Attackers, meanwhile, will weaponize AI to craft stealthier phishing campaigns or develop polymorphic malware. That arms race makes robust governance and continuous improvement critical.
At GBT, I expect AI to permeate beyond the SOC into areas like fraud prevention in travel bookings, user behavior analytics and even personalized security training. Ultimately, security leaders who leverage AI thoughtfully will gain a competitive edge—protecting their enterprises at scale while freeing talent to focus on the most complex challenges. It’s a major paradigm shift, but one that promises stronger defenses and faster innovation if we manage it responsibly.