Craig Dunham is the CEO of Voltron Data, a company specializing in GPU-accelerated data infrastructure for large-scale analytics, AI, and machine learning workloads. Before joining Voltron Data, he served as CEO of Lumar, a SaaS technical SEO platform, and held executive roles at Guild Education and Seismic, where he led the integration of Seismic’s acquisition of The Savo Group and drove go-to-market strategies in the financial services sector. Craig began his career in investment banking with Citi and Lehman Brothers before transitioning into technology leadership roles. He holds an MBA from Northwestern University and a BS from Hampton University.
Here’s a glimpse of what you’ll learn:
- [03:22] Craig Dunham explains how Voltron Data reduces the cost of data processing
- [04:46] Why acceleration across the data stack is essential for modern AI workflows and agentic computing
- [07:30] How a major retailer cut inventory forecasting time using Voltron Data
- [10:17] Why GPUs are better suited than CPUs for large-scale data analytics
- [13:43] Common problems Voltron Data solves for massive datasets
- [17:21] What onboarding with Voltron Data looks like and how open-source compatibility makes integration easier
- [21:12] How Voltron Data prices its service based on terabytes scanned
- [26:47] A global bank’s use of Voltron Data to detect anomalous trading behavior across worldwide exchanges
- [33:24] Craig’s journey from investment banking to SaaS leadership
- [41:44] Lessons Craig learned about leadership, tough personnel decisions, and aligning talent with company needs
In this episode…
In a world where efficiency and speed are paramount, how can companies quickly process massive amounts of data without breaking the bank on infrastructure and energy costs? With the rise of AI and increasing data volumes from everyday activities, organizations face a daunting challenge: achieving fast and cost-effective data processing. Is there a solution that can transform how businesses handle data and unlock new possibilities?
Craig Dunham, a B2B SaaS leader with expertise in go-to-market strategy and enterprise data systems, tackles these challenges head-on by leveraging GPU-accelerated computing. Unlike traditional CPU-based systems, Voltron Data’s technology uses GPUs to greatly enhance data processing speed and efficiency. Craig shares how their solution helps enterprises reduce processing times from hours to minutes, enabling organizations to run complex analytics faster and more cost-effectively. He emphasizes that Voltron Data’s approach doesn’t require a complete overhaul of existing systems, making it a more accessible option for businesses seeking to enhance their computing capabilities.
In this episode of the Inspired Insider Podcast, Dr. Jeremy Weisz interviews Craig Dunham, CEO at Voltron Data, about building high-performance data systems. Craig delves into the challenges and solutions in today’s data-driven business landscape, how Voltron Data’s innovative solutions are revolutionizing data analytics, and the advantages of using GPU over CPU for data processing. He also shares valuable lessons on leading high-performing teams and adapting to market demands.
Resources mentioned in this episode:
- Craig Dunham on LinkedIn
- Voltron Data
- Predictably Irrational: The Hidden Forces That Shape Our Decisions by Dr. Dan Ariely
- What It Takes: Lessons in the Pursuit of Excellence by Stephen A. Schwarzman
- How I Built This
- Masters of Scale
- The Hard Thing About Hard Things: Building a Business When There Are No Easy Answers―Straight Talk on the Challenges of Entrepreneurship by Ben Horowitz
- The Five Dysfunctions of a Team: A Leadership Fable by Patrick Lencioni
- Dare to Lead: Brave Work. Tough Conversations. Whole Hearts. by Brené Brown
- Death by Meeting: A Leadership Fable…About Solving the Most Painful Problem in Business by Patrick Lencioni
Related Episodes:
- “Automation Solutions with Wade Foster Founder of Zapier” on the Inspired Insider Podcast
- “[SaaS Series] Tips To Thrive in the SaaS Space With Sujan Patel” on the Inspired Insider Podcast
- “Pipedrive: Brain Surgery, Married, & Moved Company from Estonia to U.S. All at Once – with Urmas Purde [Inspiration]” on the Inspired Insider Podcast
- “Building Wealth and Scaling Strategies With Richard Wilson of Family Office Club” on the Inspired Insider Podcast
Quotable Moments:
- “We have this really simple mission, which is how is it that we can drive the cost of data processing as close to zero as possible?”
- “Think of a CPU as a single-lane highway, right? You got one lane, and everything is funneling through one single lane. Think of the GPU as like a five- or six-lane highway.”
- “There’s a tremendous amount of value that can happen by doing things faster and cheaper.”
- “It’s really easy to integrate our stuff. Use us when you need us, and then point your data processing somewhere else.”
- “If you can figure out how to just take that people skill and leverage it, you’re going to have this exceptional career.”
Action Steps:
- Embrace accelerated computing: Leveraging GPUs over CPUs boosts data processing speed and tackles the challenge of handling large data volumes in the AI and big data era.
- Optimize data management strategies: An agile data processing infrastructure is essential for meeting SLAs and minimizing delays that could disrupt business operations.
- Integrate open-source technologies: Streamline data processing and adapt to new advancements without overhauling an organization’s entire infrastructure.
- Explore use cases beyond current scope: Businesses should revisit overlooked data analytics opportunities limited by past constraints.
- Foster collaboration between technical and financial leaders: Bringing data scientists, engineers, and financial executives together can lead to cost-effective data solutions and curb rising data processing costs.
Sponsor for this episode
At Rise25, we help B2B businesses give to and connect with their ‘Dream 200’ relationships and partnerships.
We help you cultivate amazing relationships in two ways.
#1 Podcasting
#2 Strategic Gifting
#1 Our Predictable Podcast ROI Program
At Rise25, we’re committed to helping you connect with your Dream 200 referral partners, clients, and strategic partners through our done-for-you podcast solution.
We’re a professional podcast production agency that makes creating a podcast effortless. Since 2009, our proven system has helped thousands of B2B businesses build strong relationships with referral partners, clients, and audiences without doing the hard work.
What do you need to start a podcast?
When you use our proven system, all you need is an idea and a voice. We handle the strategy, production, and distribution – you just need to show up and talk.
The Rise25 podcasting solution is designed to help you build a profitable podcast. This requires a specific strategy, and we’ve got that down pat. We focus on making sure you have a direct path to ROI, which is the most important component. Plus, our podcast production company takes any heavy lifting of production and distribution off your plate.
We make distribution easy.
We’ll distribute each episode across more than 11 unique channels, including iTunes, Spotify, and Amazon Podcasts. We’ll also create copy for each episode and promote your show across social media.
Cofounders Dr. Jeremy Weisz and John Corcoran credit podcasting as being the best thing they have ever done for their businesses. Podcasting connected them with the founders/CEOs of P90x, Atari, Einstein Bagels, Mattel, Rx Bars, YPO, EO, Lending Tree, Freshdesk, and many more.
The relationships you form through podcasting run deep. Jeremy and John became business partners through podcasting. They have even gone on family vacations and attended weddings of guests who have been on the podcast.
Podcast production has a lot of moving parts and is a big commitment on our end; we only want to work with people who are committed to their business and to cultivating amazing relationships.
Are you considering launching a podcast to acquire partnerships, clients, and referrals? Would you like to work with a podcast agency that wants you to win?
Rise25 Cofounders, Dr. Jeremy Weisz and John Corcoran, have been podcasting and advising about podcasting since 2008.
#2 Our Comprehensive Corporate Gifting Program
Elevate business relationships with customers, partners, staff, and prospects through gifting.
At Rise25, thoughtful and consistent gifting is a key component of staying top of mind and helps build lasting business relationships. Our corporate gift program is designed to simplify your process by delivering a full-service corporate gifting program — from sourcing and hand selecting the best gifts to expert packaging, custom branding, reliable shipping, and personalized messaging on your branded stationery.
Our done-for-you corporate gifting service ensures that your referral partners, prospects, and clients receive personalized touchpoints that enhance your business gifting efforts and provide a refined executive gifting experience. Whether you’re looking to impress key stakeholders or boost client loyalty, our comprehensive approach makes it easy and affordable.
Discover how Rise25’s personalized corporate gifting program can help you create lasting impressions. Get started today and experience the difference a strategic gifting approach can make.
Email us through our contact form.
You can learn more and watch a video on how it works here: https://rise25.com/giftprogram/
Contact us now at [email protected] or message us here https://rise25.com/contact/
Episode Transcript
Dr. Jeremy Weisz: 00:22
Dr. Jeremy Weisz here, founder of InspiredInsider, where I talk with inspirational entrepreneurs and leaders. Today is no different. I have Craig Dunham of Voltron Data. You can check him out at VoltronData.com. And Craig, before I formally introduce you, I always like to point out other episodes of the podcast people should check out. Since this is part of the, you know, SaaS series, some other interesting episodes are: we had one of the founders of Zapier on, which was a really cool episode, kind of seeing their trajectory. We had one of the founders of Mailshake on, a cold email platform, and they talked about how they grew to 70,000 users. So that was interesting as well. And Pipedrive was another one; at the time, I think they were at 10,000 customers, and I think now they’re over 100,000. And he talked about the journey of, you know, a brain tumor, moving from Estonia to the US, and just much more. So it was kind of a crazy journey for him.
And this episode is brought to you by Rise25. At Rise25, we help businesses give to and connect to their dream relationships and partnerships. We do that in two ways. One, we’re an easy button for a company to launch and run a podcast. So we do the strategy, accountability, and the full execution and production. And number two, we’re an easy button for a company’s gifting. So we make gifting and staying top of mind with your clients, partners, and prospects easy. You hand us your address list and we do everything else. So Craig, we call ourselves kind of the magic elves that run in the background and make it as stress-free as possible for a company to build better relationships and, most importantly, focus on running their business. And for me, over the past decade, I’ve found no better way than to profile the people I admire and share with the world what they’re working on through the podcast, and sending them sweet treats in the mail. So you can check out everything at rise25.com, or email us at [email protected].
I am super excited to introduce Craig Dunham, CEO at Voltron Data. He leads, you know, the global strategy and execution. He’s really played a pivotal role in scaling multiple B2B SaaS companies, including Seismic and Lumar, which we’ll get into. And Craig has driven significant revenue growth, led M&A integrations, and expanded businesses across multiple regions and industries. He’s got really a wide breadth of experience and knowledge. He holds an MBA from the Kellogg School of Management at Northwestern and serves on, you know, the board of directors for the Humane League and LINK Unlimited. So, Craig, thanks for joining me.
Craig Dunham: 02:54
Jeremy, thanks so much for having me.
Dr. Jeremy Weisz: 02:55
Pretty cool pedigree.
Craig Dunham: 02:57
Thank you. Thank you. I try to keep myself busy doing things that are meaningful and important. And I love having conversations like this, by the way. So thank you so much for bringing me on.
Dr. Jeremy Weisz: 03:07
Definitely. Start off and just talk about Voltron Data and what you do. And for people listening, there is a video component, so I’m going to pull up their website as we’re talking. So tell people about Voltron Data.
Craig Dunham: 03:22
Yeah. So I often get asked what it is and who Voltron Data is. And the shortest way I can explain it is we help enterprises accelerate computing with a really fast SQL engine. And we have this really simple mission, which is how is it that we can drive the cost of data processing or data analytics as close to zero as possible, and that’s it. And so we do that by thinking about the problem in a really different way. A lot of existing technologies today that do data processing are leveraging CPU technology, which has been around for many, many years. And we’ve built a GPU-accelerated engine from the ground up, which takes advantage of all the various acceleration that can happen with the GPU chip versus the CPU. And there’s accelerated storage and networking and memory, and all these things matter a whole lot today, especially as we think about the world of AI and how that’s evolving and how computing needs to keep up to meet the needs of these AI applications.
Dr. Jeremy Weisz: 04:15
Yeah, I could see, you know, there’s so much more computing now with AI. And also, you know, especially because everyday people are using it now. So I would love to talk about some use cases so people can understand how it works, right? You know, accelerated computing. What does that mean? How does it work? I don’t know if you want to start with maybe a large retailer and how they’re using Voltron Data.
Craig Dunham: 04:46
Yeah. And what I’ll do is I’ll give sort of a baseline overview and then I’ll tell a specific story there. But we’re at this really interesting inflection point in computing today. And, you know, this idea of acceleration, of things going faster than they had historically, is becoming the standard. And so over the last couple of years, you saw this evolution that was all about how do I optimize my LLM or my large language model, OpenAI’s ChatGPT being the most famous of those. And that was the beginning of this evolution. And now what we’re seeing happening is there’s this shift towards agentic workflows, right? Where there’s this optimization that has to happen across a much broader swath of data as you start thinking about how do I deploy these agents. These agents are doing and completing specific tasks or answering questions, etc. And so in order to make those agents as effective as possible, you need to really think about acceleration across all of your data ecosystem: how you store data, how you move data from one place to the other, how you run computational analytics on that data. And without that acceleration entirely across the stack, it becomes really difficult to have all the AI models or agents or whatever it happens to be, be as effective as possible. And so, you know, one of the use cases that we often talk about is with a large retailer, one of the larger retailers in the world, looking to do some inventory optimization, let’s call it. And they’re trying to really predict what perishable goods would sell in all of their stores across the country on the next day. And the reason they’re trying to do this is that they want to make sure that, you know, they know what goods to ship from the distribution centers to the actual stores, to make sure that everyone who wants to buy cabbage or carrots or whatever it happens to be is able to do so, that there are enough on the shelves. And so they have a bunch of raw data, which is all the transactional history of the sales that have been happening across these stores over a time horizon, and they would go do some sort of data processing overnight to get things ready to feed into an AI model, which they would then use to predict things.
Dr. Jeremy Weisz: 07:00
Like if they say, okay, across all of our stores we sell, or maybe this particular store sells, like 500 bananas a month. Because I’ve gone to, like, Target and they were out of bananas. I’m like, how is this possible? Because they probably didn’t use you and connect the data points. You know, something like, we sell like 500 of these a month, I think we’ve ordered 300, we need to order 200 so that we have a supply in place, that kind of thing.
Craig Dunham: 07:30
That’s right, that’s right. And so in this scenario, you know, they had a whole bunch of servers that are running these analytics, and it would take them eight hours to run the full process from end to end. And so you can imagine, you know, if something happens during that eight-hour time frame and the query fails or something like that, then all of a sudden they’re not meeting this SLA, the service level agreement, to be able to get the data to predict what’s going to happen the next day. And so it just didn’t leave any room for error, any room for de-risking. And, you know, leveraging our technology, suddenly they’re able to take this query job that would take eight hours to run and shrink it down to about 30 minutes. And so now you can imagine what happens: well, if something happens to fail, they can run it multiple times. But more importantly, they can start to optimize the model now to get even better results, because they can run it multiple times, see what’s working and what’s not working, and apply some learnings. And so there’s just a tremendous amount of value that can happen by doing things faster and cheaper. And you know, we always say faster, cheaper, more energy efficient. Those tend to be the themes that we run on and we try to help our customers with. But, you know, we’re talking tens of millions of dollars in savings as a result of being able to predict this more accurately.
Dr. Jeremy Weisz: 08:50
Yeah, that’s what I was actually going to piggyback on, which is, like, if that’s going from eight hours to 30 minutes, then there’s a huge time savings. But it’s also an energy savings, like running these machines. Obviously there’s a speed to it. What are the advantages of cutting that down? I mean, for the process in general.
Craig Dunham: 09:15
Yeah. So, you know, the base of our technology is the GPU versus the CPU, which I talked about just a moment ago as I gave the intro. GPUs by nature do use more power, and they are more expensive on a one-to-one basis. But what we do at Voltron Data is we are able to optimize the GPU so efficiently that, in an ideal setup, of course, you could reduce 100 CPU servers to one GPU server. And so again, while the GPU is more expensive and uses more energy, the ratio of savings from 100 to 1 is meaningful, right? It’s not that much more energy that’s used by a GPU versus a CPU. And so we’re really about shrinking, you know, the land, shrinking the energy, shrinking the amount of hardware that is required. And think about all the associated costs that come along with that.
Dr. Jeremy Weisz: 10:10
For people who don’t know what a GPU is or what that means, can you explain briefly?
Craig Dunham: 10:17
It’s a graphics processing unit. Historically, they’ve been used in the world of gaming, back in the day. And, you know, at its most fundamental level, think of a CPU as a single-lane highway, right? You got one lane and everything is funneling through one single lane. Think of the GPU as like a five- or six-lane highway. So there’s this parallel processing nature that can come with a GPU. But originally this was used for gaming, and Nvidia sort of made its name in and around this idea of GPUs for games, and other companies as well. And the idea that you can use GPUs for AI then became the next sort of big major application. And, you know, for us, we said, well, actually, if you’re running all of these models and LLMs leveraging GPUs, the data analytics component further downstream, which feeds the data in to make those models more efficient, that’s all being run on antiquated technology called CPUs, right? The single-lane highway. So imagine if you have a single lane.
Dr. Jeremy Weisz: 11:19
Would you rather be driving on a single-lane or a five-lane highway type of thing?
Craig Dunham: 11:23
Exactly right. So think about the bottlenecks that causes. You can’t feed your models fast enough because you’ve got this single-lane highway suddenly feeding out to the six-lane highway, right? So we want to make that all more efficient.
Dr. Jeremy Weisz: 11:32
Talk about this. And you mentioned Nvidia here. I know your company did a post on this right here, “Nvidia Spark-RAPIDS hits the wall.” Can you talk about some of the concepts in this?
Craig Dunham: 11:45
Yeah, I mean, it’s a really similar idea to what I just described. And so, you know, the thing that has happened is, you know, we’re not the first ones that have said, hey, look, GPUs can be more efficient than CPUs for a number of different tasks. But what is happening in the industry is that people are retrofitting their technologies to now be able to run leveraging GPUs, whereas we have built ours from the ground up. And so imagine you have an application or a technology that has existed for five, six, seven years, and you’re going to say, hey, I’m going to retrofit this now because I know that I can use GPUs really well and it’s going to make it better. Sure, you’re going to get some uplift, but we have built ours from the ground up very purposefully to take advantage of every bit of compute efficiency that you can get with a GPU versus a CPU. And so it’s ground up, fully. We call it accelerator-native. And so it’s really fast storage and really fast memory and really fast networking, sort of how data moves back and forth. And so we’re all about how do we fully optimize every bit of computing power that exists inside of a computer or a server or whatever it happens to be. And with this Spark RAPIDS example that we gave here, it’s, you know, it’s Nvidia’s own technology, and they’re even retrofitting this to run better on a GPU, whereas ours is just GPU-native from the beginning.
Dr. Jeremy Weisz: 13:16
You know, in the example of the large retailer, right. So can Voltron Data basically plug into any system? So like, they go in, are you contacting them, are they contacting you, and what issue are they coming to you with? Is it just that processing time, or are there other things where they come to you and go, hey, I heard you could solve this processing time issue?
Craig Dunham: 13:43
Yeah. So, I mean, you know, it’s a combination of both. And so there are times, obviously, we have, you know, a hypothesis and enough data that we feel really confident on who and what our ideal customer is, what that person looks like, and the types of problems they have. And, you know, those problems typically start to center around, hey, I’ve got, you know, a large, massive, call it petabyte-scale set of data, and the cost and speed and complexity of managing that data is just really high and only getting higher every day as you think about the amount of data that’s being generated. I mean, everything we do, from our phones to our cars to our, you know, technology at home, everything is kicking off log data or telemetry data, etc. And so you can imagine the amount of it just continues to grow. And then we think about our customers in a sense of having these problems where it’s, okay, I’ve got to do something with that data, which typically means I need more hardware or servers to run the data, I need more land to then put those servers on, I need more energy to then power those servers, right? And so there’s the problem of how do I manage these massive data centers. And then what’s happening as a result is they’re building out these systems, and like the example I gave with the retailer, there are these SLAs which say, well, I’ve got to get this data processing workload, this, you know, join this data, aggregate this data, run mathematical computations on this data, I’ve got to get it done within a certain window of time. But my infrastructure can’t keep up with that SLA. And so then they start downsampling data, which means, well, actually, instead of running the analytics on the full set, I’m only going to run it on a portion of the data, and then I will maybe use that and extrapolate and get to some outcome.
Dr. Jeremy Weisz: 15:20
Because it’s too time-consuming, it’s too much money to build more data centers. And it’s like, who cares about the bananas? We sell more electronics. Let’s just do it in the electronics section of Target and forget about the bananas. Like, who cares if we sell a 79-cent bunch of bananas? But it’s a bad customer experience in the end. Who’s the person, like, what’s the position at these companies, that is thinking about this, that is worrying about this, that is coming to you for this?
Craig Dunham: 15:49
Yeah. So historically for us, it’s been, you know, data scientists, data engineers, you know, data platform leaders. So a more technical leader. But what’s interesting, Jeremy, actually, now is we’ve been testing some messaging recently, and we’re getting a lot more uptake from CFOs or folks in the finance department, because now they’re starting to look at, you know, costs ballooning out of control.