#92 - Three Breakthroughs: A Survey of the AI Landscape
Sam:
Welcome to Tech Optimist, where we don't just talk about the future, we break it down, analyze it, and watch it unfold in real time. I'm Sam, a producer and host of this show. This week we've got three massive breakthroughs shaking up the AI world. A Chinese startup builds a model that rivals OpenAI on a shoestring budget. The US government bets big on AI supercomputing with a half-trillion-dollar initiative. And OpenAI launches Operator, the first AI assistant that doesn't just respond, it takes action. The stakes are high, innovation is moving faster than ever, and the competition is getting fierce. So buckle up. This episode is all about power, progress, and the next wave of AI dominance.
Speaker 2:
Do you have a venture capital portfolio of cutting-edge startups? Without one, you could be missing out on enormous value creation and a more diversified personal portfolio. Alumni Ventures, ranked a top-twenty VC firm by CB Insights, is the leading VC firm for individual investors. If you believe in investing in innovation, visit av.vc/foundation to get started.
Sam:
As a reminder, the Tech Optimist podcast is for informational purposes only. It is not personalized advice, and it's not an offer to buy or sell securities. For additional important details, please see the text description accompanying this episode. If you thought last week was a big one for AI, buckle up, because this week just changed the game.
Mike:
Hi, welcome to Alumni Ventures and our three breakthroughs discussion. I'm here with Lucas again this week. Wow. I mean, you always think, oh, it's been a busy week in technology and news and et cetera, and yeah, there's a lot going on.
Sam:
All right, a Chinese AI startup just made the entire industry do a double take with DeepSeek, a model that's proving you don't need billions of dollars or an army of GPUs to build something groundbreaking. They built a model that rivals OpenAI's best with a fraction of the computing power and budget, but was it innovation or imitation? OpenAI has accused them of stealing GPT's intelligence, sparking a major controversy, which we're going to get into a little bit today. Then we turn to Project Stargate, a $500 billion initiative that's making waves in Washington.
This is the US government's biggest bet yet on AI, aiming to build out computing infrastructure at an unprecedented scale. But does this signal a new AI arms race, and what does that even mean? Finally, we're talking about OpenAI's Operator. We touched on this a little bit last week, but it's the first AI assistant that takes action instead of just answering questions. Imagine a future where AI books your flights, orders your groceries, or even manages your schedule all on its own. It's not perfect yet, but it's the clearest sign of where AI is headed. So these aren't just cool tech stories. They're shaping the future of global competition, infrastructure, and automation. So let's jump in and unpack what it all means.
Mike:
So we've got DeepSeek, Stargate, Operator, et cetera. I think this may be all AI, but let's start with DeepSeek, right? So the world had kind of a stroke when this little Chinese company came out with a model it had built, which people are very impressed by, and they're kind of giving it away open source, and they built it in a new and better way. And so I've got my points of view. Lucas, what are your bullet points on it?
Lucas:
Yeah, I mean, just to go back to the top, it has not been a slow news week, but this definitely takes the crown for the week, and everybody seems to have lost their dang minds over this thing. I think the most intriguing aspect is DeepSeek's model efficiency. Unlike the Western AI labs that spent hundreds of millions of dollars to train their models on massive, massive GPU clusters, DeepSeek managed to achieve competitive results using just a couple thousand GPUs over a two-month period on a budget of $6 million. At least, that's what they're saying in their marketing materials. So it raises really critical questions. Have we overestimated the cost of achieving state-of-the-art AI? And if so, what does that mean for the future of AI investment and development?
Mike:
Yeah, listen, I mean, you can go a million different directions with this, just to bat the tennis ball back a little bit. This definitely points to the fast-follower business strategy, which, over the long arc of venture capital, I've seen be a pretty good one. I mean, Apple has usually not been first into the market. It's a lot easier to be a fast follower, right? Kind of thing. So I think that's one thing. I personally was like, oh, you're surprised that China is going to compete in this space, and that innovators and entrepreneurs aren't going to deal with constraints and find new and better ways? That's naive, right?
And I'm like, competition is good. The fact that we've got Google and Meta and Elon and other players pushing each other to make AI better, I think, is fantastic. I mean, as a venture capitalist, we weren't interested in taking on Google in LLMs, right? I think you want to invest in businesses who benefit from this competition, who own a customer, own a data set, own a niche, know a problem better than anybody in the world, and each innovation, each breakthrough, each... Let these guys be in a multi-billion-dollar bloodbath and it's going to help my business, and I'll just speak of Alumni Ventures as a business itself. We're going to use the best models to solve the problems that we want to solve, period. Bring the competition on is my point of view. And then my last thought, and then I'll let you bat it back to me, Lucas.
Lucas:
I'm excited to volley the ball back over.
Mike:
Yeah, because this is also a case of, like a lot of things in the human experience: oh, when we have this much energy, we'll have enough. When we have this much compute, this'll be enough. All of the great ideas have happened. This market, it's too late now. I only wish I'd been around 10 years ago in venture capital and innovation. And I've just, again, seen this song over and over again. Take Zuck after DeepSeek: he's announcing, again, a facility half the size of Manhattan. Yeah, could these things be more efficient? Can you do more with less? Great. That's just going to add to what these huge investments are going to be able to deliver to the world. So, your stroke.
Lucas:
I think those points are all very valid. Competition will always spur more innovation to the benefit of enterprises and consumers. That's what we believe as tech optimists. But I do think that, so there are a lot more stories coming out today, and DeepSeek, at least as of a few hours ago, I don't think has actually officially responded to this, but there's a lot of talk about the use of distillation, which in AI moves away from fair competition toward questions around are we ripping off another company's IP. Distillation in AI is this technique where a smaller, more efficient model is trained to replicate the behavior of a larger, more advanced model. So instead-
Mike:
It doesn't exist unless you have done the work on the big ass model, right?
Lucas:
It's like a junior chef learning from a master: rather than spending years experimenting with every ingredient and technique, the apprentice just closely observes the master's process, makes those same dishes, and opens a new restaurant, if I were to bastardize the situation a little bit. [inaudible 00:10:10]. And you might say, okay, well, what do I care as a customer if that's what DeepSeek did, and now I get the same model for cheaper, that's to my benefit. But we need to see the repercussions of this. OpenAI is now accusing DeepSeek of really crossing ethical and legal boundaries by using distillation in an unauthorized way. Essentially, they're accusing DeepSeek of scraping ChatGPT's responses to then train its own model.
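To make the distillation technique described above concrete, here is a minimal sketch of classic logit-based distillation in PyTorch. It is illustrative only, not DeepSeek's or OpenAI's actual method, and the models and data are placeholders; distilling from a closed model accessed through an API would typically use the teacher's generated text as training targets rather than its logits.

```python
# Minimal knowledge-distillation sketch (illustrative only; "student", "teacher",
# and "batch" are placeholders, not any company's real models or data).
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, batch, optimizer, temperature=2.0):
    """One training step where the small student mimics the large teacher's soft predictions."""
    with torch.no_grad():
        teacher_logits = teacher(batch)      # the big, expensive model's outputs
    student_logits = student(batch)          # the small, cheap model's outputs

    # KL divergence between softened distributions: the student learns the
    # teacher's behavior instead of training from scratch on raw labels.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```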
Mike:
By the way, I think that's exactly what they did.
Lucas:
Yeah. And it's in all of the user agreement language that you cannot do that. I don't think we should be surprised that a Chinese-owned company was perhaps engaging in that type of [inaudible 00:10:58]. But we need to see kind of where this goes, and it raises really interesting questions for Project Stargate. So I don't know if you want to, I can volley the ball.
Mike:
Let me respond to that, which is... Yep. And this is, again, we've talked a little bit about doing hard things, the simple example being hardware and software, both hard. The ability to, I'm going to say, shortcut software development versus take what Apple has done over decades, where you work the entire stack a hell of a lot harder than anyone. I mean, again, this is classic Clayton Christensen, right? About the value chain and the value stack, and people taking one particular slice and commoditizing the business. That is exactly what has happened here, while the race is to vertically integrate. And if vertical integration can give you a competitive advantage, which is really hard to do and takes a really long time, but if you can achieve that, somebody's not going to rip you off in the same way that seemingly has happened here.
So I think when you look at it, and listen, these are big boys with big pockets who are super smart, and it's the same reason that Sam's getting into robotics and wants to get into chips and wants to, they're all wanting, they all know that this is the playbook. So for me, there's a little bit of a, gosh, I can't believe how surprised everybody is. Well, that may just be because we live in this world every day and we have seen this playbook, or I have, for 30 years. So for me it was "of course," whereas the average retail investor who bought NVIDIA stock and sees a correction, their perspective may be a little different. Let me leave it at that.
Lucas:
Yeah, it's such a good point. You're kind of getting at the heart of, even if OpenAI was not open sourcing their model, they knew that there's still a long road ahead for building the large sustainable moats, and that's something you just elaborated on that we on the investing team wholeheartedly believe in. We wrote a check into a company called Opal that OpenAI invested a lot of money into, and they're trying to use this company to be kind of the hardware interface for interacting with the model when it's not on mobile and desktop. What does that hardware interface look like? They know this needs to be a full hardware and software stack, kind of integrated, to form that more defensible moat.
Sam:
So let's break this controversy down a bit more. DeepSeek recently launched an AI model that performs at a level comparable to ChatGPT, but here's the kicker: they reportedly developed it for just $6 million, a fraction of what US AI firms are spending. This has left many wondering, how did they do that? OpenAI claims they have the answer: model distillation, a process where an AI is trained by mimicking responses from an existing model. While widely used in AI development, it violates OpenAI's terms of service if done using their proprietary data. US officials have weighed in, with some suggesting DeepSeek may have unfairly leveraged US technology to create a dirt-cheap AI model. But this controversy isn't just about ethics. It's also shaking up the tech industry and the stock market. After DeepSeek's model went public, major AI stocks including NVIDIA, Microsoft, and Alphabet saw dips, signaling concerns over cheaper, more competitive AI alternatives.
Then there's the issue of content control. Tests show that DeepSeek handles politically sensitive topics differently from ChatGPT, particularly when it comes to China, human rights, and global conflicts. Some argue this highlights the risks of AI bias and censorship, especially as open source and closed source AI models continue to evolve. So what does this mean for the future of AI? With the rise of faster, cheaper competitors, will AI development shift toward cost efficiency rather than raw power? And if companies like OpenAI struggle to protect their intellectual property, what does that mean for innovation? These questions aren't just theoretical; they're shaping the next phase of this AI competition.
Lucas:
But I'm really interested in what happens next with Project Stargate.
Mike:
Yeah. Let's talk about that a little bit. Again, for our listeners, this was the big announcement in the Oval Office two or three days after the inauguration, which was a big cluster. And then there was some back and forth about the number that got used and how real that number is. Headline numbers are headline numbers, and they're meant to be attention-grabbing and this, that, and the other thing, so I would put a lot of discount into the details of it. You can break it apart: what's first year, second year, third year? How much of this is going to be done using deep-pocketed venture debt? I mean, we have big players, BlackRock, et cetera, who have basically said, "Hey, we're here to loan money against these kinds of things." So how much actual equity is getting put forward? It's not anywhere near 500 billion in year one, for sure. Again, [inaudible 00:17:35] is big picture, folks. What this is to me, again, is just the pattern, which is big money.
A lot of groups are building a lot of these enormous GPU clusters, and there's game theory involved. Again, you want to listen to Zuck being interviewed by Rogan about it. These guys all know game theory, they all know prisoner's dilemma problems, and with all of that, they think that this is a game they have to play and that they want to play, and that there is some non-trivial possibility that this is the last big game that we are all going to play on this dimension.
When software becomes smarter than humans and can do things better than humans, that's never happened at scale in the human experience. So all bets are off, so they're all betting the firm. And so it's very fun and interesting to watch, and this won't be the last big cluster to be built. And I tend to be in the camp, for this kind of stuff, compute, again, take it from an old man who has the original Mac sitting over here, which I think has 128K in it, and then they came out with 512 and it's like, who would ever need 512K in a computer? [inaudible 00:19:28].
And it's like, humans, we're just wired for more, and we will find uses for more, and we will always reach for more and higher and bigger. And if you build compute, we will use it. And if you're able to produce energy, it will be consumed. That would be my bet. That's where my point of view on this stuff is. But Stargate is big. I think it's interesting the government's in the game, at least showing support. Related to it, I think the dialogue is, what are the bottlenecks to these things? You've got clearly the chips, you've got memory issues, you've got energy issues.
And energy to me is a stickier one, because we've had 40 years of kind of energy stagnation. And I do think an unintended consequence of DeepSeek is a wake-up call that we don't own this market forever as a birthright. And if we are going to compete, we're going to have to continue to innovate. We're going to have to do things, and we can't be hamstrung by regulation that's going to prohibit us from building the energy we're going to need in order to power these things. Otherwise, we will lose. And if you lose in this game, you're playing a very dangerous game.
Sam:
All right, we're going to pause the conversation here and hop into an ad. Don't go anywhere.
Speaker 5:
Hey everyone, just taking a quick break so I can tell you about the AI fund from Alumni Ventures. Alumni Ventures is one of the only VC firms focused on making venture capital accessible to individual investors like you. In fact, Alumni Ventures is one of the most active and highly rated VCs in the US, and we co-invest alongside renowned lead investors. With our AI fund, you'll have the opportunity to invest in a portfolio built entirely around advancements in AI. This fund consists of 15 to 20 investments in multiple fields where AI is making a huge impact, including areas such as machine learning, healthcare, education, transportation, and more. To get started, visit us at av.vc/funds/aifund. Now, back to the show.
Lucas:
I think all of that is right. I think this is an investment in brute-force scaling. Stargate represents a bet that scaling up AI hardware to an extreme degree will unlock new levels of intelligence, pushing OpenAI closer and closer to its vision, and not just OpenAI, but the other players that are involved here, Microsoft as OpenAI's lead backer, Oracle, the US government, closer to a vision of machines that can reason and plan and learn like humans. And that has been OpenAI's stated-
Mike:
Yeah. Let me put a little refinement on this discussion too, which is, I think there's a perception, again kind of related to DeepSeek, that all of these data centers are about training data and getting to the next level of capabilities. But there's the inference side of this equation, which is the chewing on problems and solving of things once you've trained the model; it's the doing stuff. Jensen said that market is a billion times bigger than the training market, a billion. And so my view of this is these data centers, you can't build them fast enough. They will be put to use. It may not all be training stuff. These things are compute centers, these are horsepower. The horsepower can be applied to a lot of different things fairly flexibly over time.
So "we're building Stargate to do X," I think, is a very narrow, short-term perspective on the building of these data centers. So I think we're going to need data centers to do old-fashioned stuff. We're going to need data centers to do training and learning and reasoning and optimization and all this kind of stuff, and then we're going to need data centers to do shit. And that's kind of the third bit of news this week, which is a little bit overshadowed, which is what I think is the first consumer-grade agent, which is Operator from OpenAI, and I played with it.
It's slow, it's janky, it's a view into the future. So again, a little bit of magic. And again, insert Asimov quote here, but you can see where this is going. It's a very simple thing: I'm going to Napa, I've got two couples, here are the dates, here are the restaurants in order where I want to go, go to town, and I go back to my day job. And over here on the right, out of the corner of my eye, it's logging in, it's reviewing dates, reviewing screens, occasionally prompting me like, hey, no times, can you go later? Oh, we have one. Do you want me to do it? Do you want to do it yourself? It can do that, and then you're just magically watching it, scrolling, swiping, clicking, filling in dates.
Again, this is MacPaint and it's 1987 again for me, which is, you can see the future and the future is really exciting. And with DeepSeek and Stargate and noise in the environment, this is a big deal, because when computers can start acting with us to do stuff, a lot of the world, our world, the world of the listeners, is basically on computers moving stuff around. Personally and professionally, that's a big part of the human existence in 2025. And if we have somebody smarter than us helping with that, that has big implications for society and the world. So it's not there yet, but if you get online, you see some people playing with it, and again, this is absolutely going to come out from DeepSeek at some point.
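To sketch what an Operator-style loop might look like under the hood, here is a minimal, purely illustrative Python sketch: observe the screen, let a model decide the next action, execute it, and hand control back to the user whenever confirmation is needed. Every name here (get_screenshot, plan_next_action, execute) is a hypothetical placeholder, not OpenAI's actual API.

```python
# Illustrative agent loop in the spirit of Operator-style browsing agents.
# The browser and model objects, and all method names, are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # e.g. "click", "type", "scroll", "ask_user", "done"
    detail: str = ""

def run_agent(task: str, browser, model, max_steps: int = 50) -> str:
    """Observe the page, let the model pick the next step, act, and
    ask the user whenever confirmation is needed."""
    history = []
    for _ in range(max_steps):
        screenshot = browser.get_screenshot()                        # observe
        action = model.plan_next_action(task, screenshot, history)   # decide
        if action.kind == "done":
            return action.detail                                     # e.g. "Reservation booked for 7pm"
        if action.kind == "ask_user":
            answer = input(f"Agent asks: {action.detail} ")          # e.g. "No 6pm table; is 8pm okay?"
            history.append(("user", answer))
            continue
        browser.execute(action)                                      # act: click, type, scroll
        history.append(("agent", action))
    return "Stopped: step limit reached"
```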
Lucas:
Yeah, absolutely.
Mike:
There's totally going to be the DeepSeek version of Operator soon. Now, again, I think people need to decide, do I want my data with DeepSeek and my credit card information with DeepSeek making reservations for me? I personally am pretty happy and pretty full up keeping up with what's going on at Gemini and Perplexity and Claude and... so I'm not hustling over to spend a lot of time on DeepSeek right now. I think I know what I'm going to get: a free, almost-as-good version from the clone company. And I am very skeptical on a lot of things, but yeah, when in doubt, I'm not going to share a lot of private information with an entity that is operating like DeepSeek is operating. But again, that's my personal choice; knock yourself out, world.
Lucas:
I think that's largely considered a best practice for tech hygiene. I think for Operator, we've talked about this a couple of times already, OpenAI just continues to be at the front of the pack for bringing top-notch AI products to the consumer. As people who focus a lot on the enterprise, I'm excited about more of the enterprise applications, but OpenAI's strategy has always been to demonstrate how this works for the consumer-
Mike:
Kind of the Apple.
Lucas:
As a gateway to a bigger market.
Mike:
They don't invent it necessarily, and you can make an argument that, again, there are ways to do some of these Operator things with Claude, but again, tricky, fussy, technical requirements...
Lucas:
But even in OpenAI's release strategy, it's through every part of their company. Their product marketing team knows that this is their strategy, and so the release video shows how a user would shop on Instacart using Operator: you tell it, this is what I want to cook tonight, here's what I already have in my fridge, go load up a cart for me. That is an ultimate consumer application. You mentioned travel before. For about a year now, I've been kicking around ideas with my team, saying ever since the first ChatGPT release, people have been excited to have an AI travel agent, but nobody's really done that well yet. This is the first real foray into it. I'm also happy that we haven't placed a bet in there, because it seems like OpenAI is going to win that bet.
Mike:
Yeah, it seems like Operator-like things are going to just do it for you, which raises an interesting question: do you need a special-interest tool, or at the end of the day, do you want to be the airline that is compatible? This is the other thing: I was looking at flights with Operator, and certain sites seem to be more AI-compatible than others. The world kind of evolved to work with Google, so if I'm going to have a website, SEO, how do I make this work with both consumers and search and those kinds of things? I think there's an analogy here: if you're solving a problem in the world, you'd better be thinking about how you work reasonably well with AI. Now, AI is smart; they're going to figure it out and bang at your site, your company, and eventually crack it. But I think there's got to be an advantage to just making it easier for the AI to... Making it easier for an agent to work with you probably makes it easier for a human to work with you too.
Lucas:
Right? There's one other dimension through which I like looking at the Operator release. Probably a couple months ago, I was reading an opinion piece on how people use ChatGPT, and for us, it's obvious on our team, on the investing team, I mean, OpenAI has increased our productivity tremendously. But it was an op-ed written by a more artistic person, and I think she said something really insightful, which is, I don't want AI to help me read and write so that I have more time to do the dishes and all the other annoying things in my life. I want my AI to do the annoying things in my life so that I have more time to read and write. And I think that's a really interesting consumer perspective, and with the release of Operator, we're finally getting to see a world where you can offload some of that. I mean, it's not doing dishes yet, because that's more on the robotics front that we had talked about before, and it is coming at some point.
But this is the first foray where we're like, oh, I could unlock a lot of really annoying things and get more time back in my day, not just for the productive things, which are what you and I care more about generally, but also for the things in my life that I love: more time with family, more time for reading and writing, more time for going out into the world. And I think that we lose that thread in the discussion quite a bit. Not you and I, but just anybody who talks about AI.
Mike:
Listen, I think it's a very insightful point, Lucas, and I think it's really important to remind our listeners that that is really the purpose of this stuff, right? But you have to take control of that, because these tools can actually work the opposite way. A recent example is Zoom, which is a productivity tool: we could do a Zoom call so I don't have to fly to Chicago and have a two-hour meeting and fly home. That's a productivity release, but did people use that time to work on their health, their relationships, their creative pursuits, their innovative pursuits, the things that they want to do, or are they just going to use these productivity gains to work more?
Or that may be your goal, but do it explicitly, right? Kind of thing. I do think humans very easily can get sucked into that, and if you don't manage your life and health and relationships, and what are my priorities and what are my goals and those big, deep things, AI can be a huge leverage point to get you more of what you really, really value, or not. But you need to be aware and conscious, which humans don't do a lot. We just get kind of sucked into stuff, and you turn around and you're not where you want to be. But this will be another place where it's like, how can this tool... Say you value reading: how can I use this tool to read more and better? It can absolutely help you do that, right?
It can summarize. An AI can analyze what you read and give you more of what you might like to read or might find interesting. You can have a conversation with AI on the way home from work about something you just read, about your experience from reading. It can be deeper, because you have a 24/7, always-ready, always-supportive book club. So you have to get to the kind of personal first principles when you have these powerful technologies. Hey, another great conversation, Lucas. I hope our community got a point or two out of it, and we'll do it again next week.
Lucas:
Thanks for having me-
Mike:
Maybe it'll be a slower week.
Lucas:
Yeah. Maybe we'll have one story next week that isn't directly about OpenAI.
Mike:
Yeah. Okay. Be well.
Lucas:
Take care.
Sam:
Thanks again for tuning into The Tech Optimist. If you enjoyed this episode, we'd really appreciate it if you'd give us a rating on whichever podcast app you're using and remember to subscribe to keep up with each episode. The Tech Optimist welcomes any questions, comments, or segment suggestions, so please email us at info@techoptimist.vc with any of those and be sure to visit our website at av.vc. As always, keep building.
