Building the Infrastructure for the AI Age

Nov 27, 2023

As in every tech era, pick and shovel makers are building the tools to unleash the potential of generative AI.

When the history books are written about the rise of generative AI, makers of foundation models like OpenAI, Anthropic, and Midjourney will have prominent roles. But no technology revolution happens without a cast of companies that provide the underlying infrastructure and tools that make it possible for businesses to put world-changing inventions to use.

One of those companies is Databricks. The 10-year-old company pioneered the data lakehouse approach for storing and cleaning all kinds of structured and unstructured data, and now hundreds of companies are also using its platform to create and run their own foundation models. “Since ChatGPT was announced last November, everyone has realized that AI will impact every job, every industry, and every part of our lives,” says founder and Chief Executive Officer Ali Ghodsi. “But the companies that build these applications are going to need the picks and shovels and other infrastructure to do the job.”

In fact, infrastructure companies often end up becoming the biggest winners in times of technology disruption. Microsoft, Intel, and Oracle provided the foundation for the PC revolution. Cisco, Sun Microsystems, and Qualcomm did the same for the Internet. When businesses migrated to the cloud, they used AWS, Azure, and Google Cloud. Along the way, companies such as Cloudflare, MongoDB, VMware, and many others have carved out large, lucrative infrastructure businesses.

Databricks is one of the first infrastructure companies of the artificial intelligence era to make that list. Sales, which topped $1 billion in June, are growing rapidly, setting the stage for the privately held company to raise $500 million at a valuation of $43 billion. That’s a sign of the historic opportunity awaiting companies that build the tech stack for the generative AI era, says NEA partner Aaron Jacobson.

“The bang for the buck on every dollar invested in AI infrastructure will be far more than in past eras, which means the potential for this generation of companies could dwarf anything in the past,” says Jacobson. “That’s why we’ve already seen a trillion-dollar infrastructure company created on the back of AI, in Nvidia. Intel was never a trillion-dollar company despite powering the PC and cloud revolutions.” Privately held OpenAI’s valuation is nearly $30 billion, more than that of Ford Motor Company.

An unprecedented upgrade

Even among tech infrastructure booms, this one stands out. For starters, companies of all sizes and in every vertical industry are embracing generative AI with astonishing speed. “Every customer I talk to — and I mean every customer — wants to leverage generative AI in a strategic way,” says Databricks’ Ghodsi.

And business leaders are under pressure to move quickly, partly because enough of the basic infrastructure is already in place for them to get started. “It took five or 10 years for the Internet to take off because the infrastructure wasn’t all in place, but now the Internet is fast, it’s everywhere, and it just works,” says Ghodsi. Rather than have to roll out fiber networks and build data centers, “this time it’s more about adding software capability to the underlying infrastructure. That can happen a lot faster,” he says.

Yet this infrastructure overhaul will be more radical in other ways. The shifts from the PC to the Web to mobile to the cloud were essentially about changing where software applications and services run and how customers could access them (for example, from licensed software to SaaS). But generative AI is revolutionizing how software is developed in the first place. Rather than providing tools only for trained technologists to convert good ideas into products, infrastructure for building with artificial intelligence is becoming accessible to non-technical people as well, or even to no people at all.

“To me, the ability for ChatGPT to spill out sentences that look human-generated is almost like a parlor trick,” says NEA partner Greg Papadopoulos, who began his career in the 1980s working with seminal AI pioneers such as Thinking Machines. “The bigger deal is how what we say or write can directly generate computer code.”

The process has already started. Many programmers now rely on GitHub Copilot to write lines of code for them. And many elements of the emerging tech stack are not only optimized to the needs of generative AI, but also use generative AI to make them easy to use.

“For the first time, you can think about anthropomorphizing the development of technology,” says Bob van Luijt, CEO of Weaviate. The company makes a “vector database” that stores information differently than traditional databases, so that everything related to a prompt can be organized for easy access. You don’t need to write perfect, error-free queries for a SQL database or for MongoDB or Elasticsearch, with no misplaced periods or asterisks. You simply ask for the information you want in natural language.

This isn’t just more efficient; it will also make technology far more useful. Today, members of Weaviate’s customer success team start their day by looking at the overnight performance stats to see if any customer is having a problem. If they see a spike of some sort, they go fix it, says Van Luijt. “But what if they could wake up and ask the system, ‘OK, what happened last night?’ and the system responds, ‘Oh, Acme Co had a problem at 2 AM, but I fixed it.’ It sounds like science fiction, but it’s becoming reality.”

This dream of an anthropomorphic infrastructure is by no means a sure thing. “The pessimistic view is that generative AI is just the latest chapter in the evolution of infrastructure,” says Van Luijt. “But needless to say, I’m an optimist. I think this is the start of a brand new book.”

Whether it's a chapter or a new book, the shift to AI and machine learning will require a major upgrade in the sophistication of businesses’ and other organizations’ IT operations. They’ll need new developer tools to write software and a new software infrastructure for managing cloud-based applications, says Luke Hoban, CTO of Pulumi. “Most of these AI capabilities will be delivered via the cloud, and the scale and complexity required to build and run all the new services will be unprecedented. That’s why there’s a gold rush to help companies scale up their cloud software engineering tooling and development practices.”

To that end, Pulumi makes infrastructure-as-code products that use AI to let software engineers use whatever programming languages they like to stitch together a growing array of new AI services from the various cloud providers. Through a new generative AI interface, users can offload jobs in much the way GitHub Copilot helps programmers write code. “I don’t know if it’s 20% or 40% or 200%, but we’re going to make these people a lot more productive,” he says.
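
The core idea behind infrastructure as code is simple: declare the resources you want in an ordinary programming language, and let an engine compute the difference between that desired state and what actually exists. Here is a minimal sketch of that diffing step; the resource names are invented for illustration, and this is not Pulumi’s actual API (real tools reconcile against live cloud provider APIs):

```python
# Toy illustration of infrastructure as code: desired state is declared
# as plain data in code, and a "plan" step diffs it against current state.
# Resource names and shapes here are hypothetical.
desired = {
    "web-server": {"type": "vm", "size": "small"},
    "model-endpoint": {"type": "gpu-service", "size": "large"},
}

current = {
    "web-server": {"type": "vm", "size": "small"},
}

def plan(desired, current):
    """Compute which resources must be created, updated, or deleted."""
    create = [name for name in desired if name not in current]
    delete = [name for name in current if name not in desired]
    update = [name for name in desired
              if name in current and desired[name] != current[name]]
    return {"create": create, "update": update, "delete": delete}

# Only "model-endpoint" is missing, so the plan is to create it.
print(plan(desired, current))
```

A real engine would then execute this plan by calling the cloud provider, which is exactly the repetitive work an AI interface can help offload.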

Meet the new stack

So what will this new stack look like? No doubt, not all of it will change. Some incumbent infrastructure providers are a lock to be major beneficiaries. Nvidia is the most obvious example. Its market valuation has quadrupled since ChatGPT was released last November. The big cloud providers will also be huge winners. Given the exorbitant cost of GPUs and servers required to run foundation models, up to 20% of all dollars spent on infrastructure could be spent with Amazon’s AWS, Microsoft’s Azure, or Google Cloud Platform.

But entrepreneurs who move quickly to solve key infrastructure issues related to generative AI have a historic opportunity to create new categories. Metronome is one such example. The company offers a billing platform designed for the complexity and scale of usage-based pricing — the dominant monetization model for generative AI. Companies are using Metronome to power the underlying infrastructure for their usage-based, subscription, and hybrid pricing models.

Metronome is reducing the technical overhead and operational management of usage-based pricing and counts AI giants including OpenAI and Nvidia as customers.

Other categories are rapidly taking shape. Research firm CB Insights says VCs have invested $11 billion in 17 AI infrastructure startups since the middle of 2022. That’s almost four times as much as has been invested in AI applications, according to the study. And at an average of $650 million per investment, investors are clearly hoping these entrepreneurs will move boldly to grab the pole position in these markets. Here are a few key categories:

Data analytics platforms

The phrase “data is king” came into vogue during the Big Data craze of the 2000s, and the arrival of AI has put the concept on steroids: models are only as good as the data they are trained on. Companies such as DataRobot and Snowflake, along with Databricks, are creating platforms to make the most of it.


Model development and monitoring

DataRobot, Weights & Biases, and Seldon offer platforms so companies can track, manage, and streamline the development and deployment of AI models. Arize and TruEra offer tools to help AI teams monitor their models and troubleshoot them when things go awry.

Data quality and labeling

Scale AI and Snorkel AI make tools for labeling and curating the data used to train models.

Vector databases

Several AI startups, such as Weaviate, make databases that store a new type of data called a “vector,” which can represent almost anything: an image, a summary of a document, or a concept. When a person enters a prompt, the database finds the stored vectors (known as embeddings) closest to the prompt’s own vector, so the most relevant information can be retrieved quickly and efficiently, including for subsequent prompts.
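
The mechanism can be sketched in a few lines. In this toy version, each document is paired with a made-up three-number vector and search means ranking documents by cosine similarity to the query’s vector; real systems like Weaviate use high-dimensional embeddings produced by a machine learning model and approximate nearest-neighbor indexes to do this at scale:

```python
import math

# Toy "embeddings": invented three-dimensional vectors standing in for
# the model-generated embeddings a real vector database would store.
docs = {
    "invoice overdue notice": [0.9, 0.1, 0.0],
    "server outage report": [0.1, 0.9, 0.1],
    "quarterly sales summary": [0.8, 0.2, 0.1],
}

def cosine(a, b):
    """Cosine similarity: closer to 1.0 means more similar direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def search(query_vec, k=1):
    """Return the k stored documents whose vectors are closest to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# A query embedded near "billing" concepts surfaces billing-related documents.
print(search([0.85, 0.15, 0.05], k=2))
```

The natural-language layer the article describes sits on top of this: the user’s plain-English question is converted into a vector, and nearest-neighbor search replaces the hand-written query.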


Compute efficiency

Neural Magic has created software that helps traditional CPUs run generative AI applications far more efficiently, so they can complement or supplant GPUs, which are in short supply.


Data storage

The Granica AI Efficiency Platform reduces the cost to store and access data while preserving its privacy, unlocking it for training. VAST Data has a new approach to storage optimized for AI.


AI chips

With demand and prices for Nvidia chips soaring, startups such as Cerebras, Graphcore, Groq, Kneron, and SambaNova are creating new chips specifically designed for generative AI. SambaNova makes customizable silicon, along with tools to develop models for particular vertical markets.


AI security

HiddenLayer, Protect AI, and Cranium are building solutions to help enterprises understand and secure how AI is being used within an organization. ForAllSecure is leveraging AI for in-depth analysis of application code to discover unknown vulnerabilities.

Open out of the gate

Every major infrastructure makeover has a sub-theme: the degree to which it will be based on proprietary technologies or on open source standards. As in most cycles, the proprietary powers tend to dominate early on, as inventors of breakthrough technologies look to maximize the return on their investment. Despite its name, OpenAI’s software is not open source. Neither is Alphabet’s PaLM 2 nor Anthropic’s Claude.

But open source is gaining popularity earlier than it did in past infrastructure cycles, says Papadopoulos. “This is a hyper-accelerated time, so we’re seeing a lot of competition and innovation that is open source earlier in the process.” Even Llama 2 from Meta is practically open source, he points out. “Just as you can download a database and a webserver to launch a website, it’s going to be the same thing with generative AI.”

Databricks is one of the companies that is riding this wave. The company recognized early on that suppliers of proprietary technologies would not be able to support all of the possible ways organizations would want to leverage generative AI — and that businesses would not want to share their precious internal data to improve the performance of AI systems used by their competitors. With an open source alternative, businesses could instead create their own foundation models, trained on that data.

“Open source models may not be as cutting edge as GPT-10 or whatever gets released down the road, but they will be more customizable, much cheaper, and probably good enough for a lot of use cases,” says NEA’s Schoen. He notes that Databricks recently acquired MosaicML, an OpenAI competitor that makes a platform that customers can use to train their own large language models, based on their own data.

A golden age for systems thinkers

The entrepreneurs who start successful pick-and-shovel companies tend to be a different breed. Van Luijt calls them “systems thinkers,” motivated by figuring out how to build a product that accomplishes something difficult within a massive ecosystem of other infrastructure providers. The passion comes more from building something that works than from beating a competitor or achieving a certain size or stature, says Van Luijt, who grew sheepish when Nasdaq flashed his photo on its billboard in Times Square during a visit to New York last year. “It was like, ‘Thanks for your attention,’ but I’d prefer not to be on the Nasdaq billboard again,” he says.

We believe that makes this a great time for entrepreneurs willing to dive in, tackle mind-numbing complexity, and forge whatever partnerships and compromises are necessary to make a piece of infrastructure technology a standard part of the larger ecosystem. After all, the best marketing campaign or publicity won’t save a tool that doesn’t work.

On the other hand, this generation of infrastructure builders will need to be comfortable with competing at top speed. “Things moved fast in the web era and the mobile era, but nothing like this,” says Pulumi’s Hoban. “The world of foundation models seems to shift on a daily basis, so product builders need to be able to rapidly integrate new capabilities and approaches to successfully leverage AI in their products.”

The good news is that demand is not a problem for the moment. “If I described a vector database to a CTO or CIO a year ago, they would have said, ‘What would I do with that?’” says Van Luijt. “That has completely changed. Now, every C-level executive has a generative AI strategy or is tasked with coming up with one. All we need to say is: We can power that. So how can we help you?”