Jeff Gardner, CISO at Germantown Technologies, comes to Hacker Valley Studio this week to talk about the future of cybersecurity and what up-and-coming hackers may encounter on their journey into an ever-evolving industry. With a specific focus and interest in artificial intelligence, or AI, Jeff's discussion in this episode covers the current perception of AI in tech, the timeline for when we may see highly intelligent AI come into play, and what the future of AI looks like from a cybersecurity standpoint.
Timecoded Guide:
[03:54] Focusing on numerous areas during his day job as CISO and understanding the necessity of a strong team of trusted cyber professionals
[09:00] Getting excited about current and upcoming technology in cyber while remaining realistic about present day limitations and needs
[15:53] Automating security analyst tasks and finding the quality control balance between machine knowledge and human intuition
[22:50] Breaking down the concept of “bad AI” and understanding how to address the issues that may arise if AI is used for nefarious purposes
[28:22] Addressing the future of unique thought and creativity for computers and for human beings
Sponsor Links:
Thank you to our sponsors Axonius and AttackIQ for bringing this episode to life!
Want to learn more about how Mindbody enhanced their asset visibility and increased their cybersecurity maturity rating with Axonius? Check out axonius.com/mindbody
AttackIQ - better insights, better decisions, and real security outcomes. Be sure to check out the AttackIQ Academy for free cybersecurity training, featuring Ron and Chris of Hacker Valley Studio, at academy.attackiq.com
What are some of the things you expect the next generation to be doing when it comes to bypassing security in ways they won't get caught?
Jeff, like many hackers and security pros in the industry, started his journey in cyber by hacking different systems from his own computer as a kid, simply because he could get away with it. While that type of hacking still exists, there are new ways for systems to manage and counteract these threats and attacks, as well as to expose who is behind them. The next generation of hackers will learn in different ways and on different technology, and Jeff is confident their choices will follow where the security industry is already headed: toward devices that use machine learning and pattern learning, alongside the continuing development of AI.
“When it comes to artificial intelligence and all the myriad of models and neurons and all that, we're still pretty much at single neuron, maybe double neuron systems. But, as things evolve, it's gonna be harder and harder to bypass those defenses.”
What is your perspective on AI not being here and available for us yet?
In Jeff’s opinion, the biggest thing missing from our current AI, the thing that would make it the intelligence we claim it is, is creativity. We have smart technology that can automate tasks and can be told very easily what to do, all through feeding it data and processes. However, Jeff points out that most of what we call artificial intelligence in the cyber and tech industries doesn’t have the creativity or intuition to match the human brain. We’re in an exciting escalation of technology and intelligence, but we aren’t at true AI yet.
“I think one of the things that's missing from AI, and it's being solved rapidly, is creativity. We train it through models, but those models are only the data that we give it. How smart is the system if you just give it a plethora of data and have it come to its own conclusions?”
How far away do you think we are from highly intelligent AI?
Although the futuristic AI that appears in science fiction movies and books isn’t here yet, Jeff believes we aren’t far off from a level of computer technology we have never seen before. We’ve continued to see quantum leaps in technology, with computers starting to solve math problems we’ve never even thought of and engage with art in ways we’ve never dreamed possible. What we see now is the tip of the iceberg, but the future holds massive potential for what AI and the automation of certain tasks will look like, with analysis technology continuing to approach 99.9% accuracy.
“When you can get to that level of processing speed, you can do things we can't even dream of, and that's what they're doing now. They're solving math problems in ways that humans have never thought of, they're creating art in ways that humans couldn't imagine.”
How do we create AI for good?
The fear of “evil” or “bad” artificial intelligence comes up frequently when we discuss what the future of AI may look like from a security standpoint. However, Jeff is confident that the issue is not as black and white as our fears make it. For starters, when we understand the purpose behind what “bad” AI might be programmed to do, we can put measures in place to combat it. Beyond that, the struggle of good vs bad, right vs wrong has been part of hacking and cyber since the first white hats and black hats came into existence. The fear of bad AI is as much a philosophical discussion as a technical one.
“I think it all comes down to, like you said, purpose. What's the purpose of the bad AI? What's it trying to do? Is it trying to hack our systems and steal the data? Is it trying to cause physical harm?”
---------------
Links:
Stay in touch with Jeff Gardner on LinkedIn
Connect with Ron Eddings on LinkedIn and Twitter
Connect with Chris Cochran on LinkedIn and Twitter
Purchase a HVS t-shirt at our shop
Continue the conversation by joining our Discord
Check out Hacker Valley Media and Hacker Valley Studio