AI is embedded in nearly every part of our lives. And from the look of things, it’s not slowing down.
Today, 78% of global companies use AI in at least one business function, and 71% use generative AI, a jump from 20% in 2017.
While this growth is impressive, it also comes with risks. That’s why we hosted a three-part webinar series to help people and organizations understand and get ahead of the challenges. We brought together top cybersecurity professionals and executives to break down how AI is impacting software development, supply chain security, and governance.
If you missed the sessions, you can watch the replays here.
In this blog post, we break down the biggest takeaways from each session:
Webinar Day 1: Vibe Coding – Managing Risks in AI-Assisted Software Development
AI has changed the way developers work. Tools like ChatGPT and GitHub Copilot help developers breeze through tasks that once took hours, but that speed can hide serious flaws.
Gartner predicts that by 2028, 40% of new enterprise software will be built using AI-assisted development. That’s a huge shift and a major risk if done carelessly.
Zino Adidi, our Day 1 panelist and a Head of Software Engineering who manages 50 engineers, noted that “the vibe coder may be the biggest risk.” If a developer lacks adequate security knowledge, coding with AI copilots can introduce numerous security vulnerabilities into the product.
The risks discussed in this session include:
- Over-reliance on AI by non-technical staff
- Lack of manual validation before code goes live
- A false sense of confidence because AI sounds authoritative
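To make this concrete, here’s a minimal, hypothetical illustration (ours, not the panel’s) of the kind of login check an AI assistant might produce from a vague prompt, next to the parameterized version a security-aware review would insist on:

```python
import sqlite3

# Hypothetical AI-suggested code: concatenating user input into SQL
# makes this vulnerable to SQL injection.
def login_unsafe(conn: sqlite3.Connection, username: str, password: str) -> bool:
    query = f"SELECT 1 FROM users WHERE username = '{username}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

# What a security-aware review would insist on: a parameterized query,
# so user input is treated as data, never as SQL. (A real system would
# also store password hashes, not plaintext passwords.)
def login_safe(conn: sqlite3.Connection, username: str, password: str) -> bool:
    query = "SELECT 1 FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone() is not None
```

A password like `' OR '1'='1` makes the first query succeed regardless of credentials, while the second treats the same input as an ordinary string. This is exactly the kind of flaw that manual validation should catch before code goes live.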
Teams were advised to “focus on people, processes, and technology while defining principles, frameworks, and standards for using AI.”
Click here to watch the replay
Day 1 Takeaway:
AI makes development faster. But without the right checks, security risks accumulate quickly too.
Webinar Day 2: Navigating Supply Chain Risks with AI Vendors
With the proliferation of AI tools, organizations’ attack surfaces are expanding. A Lehigh University survey showed that concerns about generative AI rose considerably among supply chain managers in Q2 2024.
The need for validation of vendor claims was strongly emphasized by one of our panelists:
“When I send third-party assessment questionnaires, vendors don’t just say ‘yes’ or ‘no.’ They go beyond that, adding reasons, screenshots, and evidence…
It’s up to us not to be lazy. We need to review their answers carefully, ask questions, even request screen shares if needed.”
Interestingly, some organizations believe they’re safe simply because they don’t use generative AI directly. We challenged that mindset, highlighting the exposure that can come through fourth parties (your vendors’ vendors).
The session also brought up ‘Shadow AI’: the quiet use of unauthorized AI tools, something as simple as an AI-powered browser extension or pasting sensitive data into ChatGPT for a quick reply.
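Spotting Shadow AI usually starts with visibility. As a hedged sketch (our illustration, not a webinar prescription), here’s how a security team might flag traffic to known AI services in an exported proxy log; the CSV columns and the domain watchlist are assumptions you’d adapt to your own environment:

```python
import csv

# Hypothetical watchlist of AI service endpoints; a real program would
# maintain this list through governance rather than hardcode it.
AI_DOMAINS = {"api.openai.com", "chat.openai.com", "claude.ai", "gemini.google.com"}

def flag_shadow_ai(proxy_log_path: str) -> list[dict]:
    """Return proxy log rows whose destination matches a known AI domain.

    Assumes a CSV export with 'timestamp', 'user', and 'dest_host'
    columns; adapt the parsing to your proxy's actual format.
    """
    with open(proxy_log_path, newline="") as f:
        return [
            row for row in csv.DictReader(f)
            if row.get("dest_host", "").lower() in AI_DOMAINS
        ]

if __name__ == "__main__":
    # Review flagged traffic with the teams involved rather than
    # blocking outright: the goal is visibility first, then policy.
    for hit in flag_shadow_ai("proxy_export.csv"):
        print(hit["timestamp"], hit["user"], hit["dest_host"])
```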
Click here to watch the replay
Day 2 Takeaway:
Keep a close eye on how AI is being used across your organization, even in unofficial capacities. Engage vendors in conversations about how their models operate and what monitoring controls they’ve implemented, and prioritize validating their claims.
Webinar Day 3: AI Security Governance
AI governance is no longer a future concern.
Panelist Gbolabo Awelewa, Chief Business Officer at eSentry, made it clear:
“If we don't embed governance into our AI systems as they’re being developed, deployed, or trained, we're simply scaling risk at the speed of innovation.”
Charles Onochie, Vice President of Global Information Security at GAN, also shared that potential AI security risks include data leakage and bias, and noted that implementing input-output monitoring from the outset protects against reputational risk. Governance, then, shouldn’t be a roadblock; it helps teams stay on the right track.
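To illustrate what input-output monitoring can look like in practice, here’s a minimal sketch under stated assumptions (our illustration, not Onochie’s implementation): a wrapper that redacts obvious PII patterns from prompts and logs both sides of every model call. `call_model`, the PII patterns, and the logger name are all hypothetical placeholders:

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_monitor")

# Crude, illustrative PII patterns; a real deployment would use a
# dedicated DLP/redaction tool tuned to its own data.
PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
]

def redact(text: str) -> str:
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def monitored_call(prompt: str, call_model) -> str:
    """Wrap a model call with input redaction and input/output logging.

    `call_model` is a hypothetical stand-in for your actual LLM client.
    """
    safe_prompt = redact(prompt)
    log.info("model input: %s", safe_prompt)
    response = call_model(safe_prompt)
    log.info("model output: %s", redact(response))
    return response
```

Logging redacted inputs and outputs from day one gives governance teams an audit trail without blocking anyone; heavier controls like policy checks and alerting can be layered on later.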
The panelists advised listeners to:
- Embed accountability in AI development
- Set clear policies on data use, access, and transparency
- Assign ownership (ideally the CISO, not just a committee)
For listeners who asked how to approach implementing AI security governance from scratch, the panelists recommended:
1. Map AI usage across departments (HR, marketing, IT, legal)
2. Conduct risk assessments beyond cybersecurity – including legal and business impact
3. Define ownership, acceptable use, and response paths
Click here to watch the replay
Day 3 Takeaway:
Start small, assign ownership, and implement risk-based AI security governance.
Final Thoughts
AI is here to stay, and so are the risks. Don’t wait to react; be proactive. Rewatch the Cyberkach AI Security Webinar sessions, start mapping how you use AI in your organization, and put the right controls in place.
Got questions? Contact us and subscribe to the Cyberkach blog for more insights.