
Granica: Catalyzing AI Efficiency in the Concurrent Eras of AI and Efficient Growth

by Pete Sonsini, Vanessa Larco, and Mustafa Neemuchwala | Jun 08, 2023

History may not repeat itself, but it often rhymes. And thanks to our multi-decade history of backing next-gen technologies, we at NEA get to see history rhyme over and over again. Granica, which reminds us of multibillion-dollar outcomes in AI platforms like Databricks and DataRobot and in inline technologies like Data Domain and Cloudflare, emerges from stealth today to introduce the industry’s first AI efficiency platform, bringing decades of fundamental research to enterprises. We are beyond excited to announce that NEA has led Granica’s Series A.

The rise of AI has driven growth in the data needed to pre-train, fine-tune, prompt-tune, and run inference on models in production, along with exploding costs across the emerging AI stack. Those costs are linked to the quantity & heterogeneity of data, the size of models (parameters from the many billions to 1T+), the time spent fixing model misbehaviors & drift, and the general complexity of scaling & deploying AI. Many of these costs are fundamentally driven by data inefficiency, as training data with low information value has proliferated. Such data often contains significant redundancy and sensitive information, including personally identifiable information (PII), that increases costs, legal/compliance risk, and the time it takes to move data through the AI pipeline. Cutting these inefficiencies requires a new layer in the AI stack purpose-built to reduce redundant, low-value, and sensitive data while increasing information density, reducing model bias, and improving model performance. Granica is building this efficiency layer across the AI stack.

Granica’s first game-changing use case starts with AI data efficiency for cloud object stores, including Amazon S3 and GCS (with Azure Blob coming soon). With their inherent API accessibility and scale-out capabilities at relatively low cost, object stores have become the dominant cloud storage medium, a massive shift from prior decades of retaining structured data (databases, mostly) in block stores and unstructured data (everything else) in file systems. A vibrant ecosystem grew up around those block & file storage systems; Data Domain, for example, pioneered data deduplication to compress the backup images they produced. Unfortunately, classic data deduplication techniques are ineffective for cloud object storage for several reasons: they have challenges scaling out, they add material latency, and, more fundamentally, the natural boundaries for detecting duplicate sequences of bytes are highly variable and hidden from these algorithms.
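To make the boundary problem concrete, here is a minimal, illustrative Python sketch of content-defined chunking, the classic rolling-hash technique behind that era of deduplication systems. It is not Granica’s algorithm or any product’s implementation; the window size, mask, and chunk limits are hypothetical parameters chosen only to show how boundaries are derived from the data itself, so identical content dedupes even after it shifts position.

```python
# Illustrative content-defined chunking (CDC): chunk boundaries come from a
# rolling hash over the data itself, so duplicate content is found even when
# it shifts position. Parameters are hypothetical, for demonstration only.
import hashlib
import random

WINDOW = 48               # rolling-hash window, in bytes
MASK = (1 << 12) - 1      # cut when hash & MASK == 0 (~4 KiB average chunks)
MIN_CHUNK, MAX_CHUNK = 1 << 10, 1 << 16

def chunk_boundaries(data: bytes):
    """Yield (start, end) offsets of content-defined chunks."""
    start, h = 0, 0
    for i, byte in enumerate(data):
        h += byte                      # simple additive rolling hash
        if i - start >= WINDOW:
            h -= data[i - WINDOW]      # drop the byte leaving the window
        length = i - start + 1
        if (length >= MIN_CHUNK and (h & MASK) == 0) or length >= MAX_CHUNK:
            yield start, i + 1
            start, h = i + 1, 0
    if start < len(data):
        yield start, len(data)

def duplicate_fraction(data: bytes) -> float:
    """Fraction of bytes whose chunk hash was already seen earlier."""
    seen, dup = set(), 0
    for s, e in chunk_boundaries(data):
        digest = hashlib.sha256(data[s:e]).digest()
        if digest in seen:
            dup += e - s
        seen.add(digest)
    return dup / max(len(data), 1)

random.seed(0)
block = bytes(random.getrandbits(8) for _ in range(200_000))
sample = block + b"a few inserted bytes" + block   # same content, shifted
print(f"duplicate fraction: {duplicate_fraction(sample):.2f}")
```

On the sample above, the bulk of the repeated copy is detected as duplicate chunks even though its offsets have shifted; the hard part, as noted above, is doing anything like this efficiently behind object-store APIs at cloud scale.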

Paired with the rise of cloud object stores, the widespread adoption of AI trained on mountains of unstructured data is the real “why now” for Granica. As McKinsey reports, enterprise AI adoption has more than doubled since 2017 [1], yet according to BCG a mere 10% of organizations achieve significant financial returns from AI. Data is the fundamental fuel for AI, and regardless of how large or small models are, they need more data than ever – particularly unstructured data stored in object stores – further raising the base cost of realizing AI’s benefits.

We believe the market timing for Granica could not be better. According to IDC, if speed of adoption and scale of use were top considerations in the last “cloud first” decade, this decade is about adding control with cloud economics, efficiency, and sustainability. In a recent IDC survey, “value for money” was the highest-rated attribute for selecting a cloud technology partner, as the base cloud infrastructure layer has become commoditized. [2] 100% aligned with its customers, Granica brings to market a revolutionary outcome-based pricing model: it charges only a small percentage of the savings generated each month, and only if savings are generated, guaranteeing ROI. It is the ultimate customer-centric solution: customers pay only for value received, take no financial risk to try or expand usage, and don’t need to find or allocate budget.
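To illustrate the mechanics of that pricing model, here is a back-of-the-envelope Python sketch. The fee rate, dollar figures, and function name are hypothetical, not Granica’s actual terms; the point is simply that the fee is a fraction of realized savings, so the customer’s net benefit is never negative.

```python
# Hypothetical outcome-based pricing: charge a percentage of realized savings,
# and nothing when no savings are generated. Numbers are illustrative only.

def monthly_outcome_fee(baseline_cost: float, reduced_cost: float,
                        fee_rate: float = 0.20) -> dict:
    """Return the savings, the fee taken as a share of them, and the net benefit."""
    savings = max(float(baseline_cost) - float(reduced_cost), 0.0)
    fee = savings * fee_rate
    return {"savings": savings, "fee": fee, "net_benefit": savings - fee}

# Example: a monthly storage bill drops from $100k to $30k after data reduction.
print(monthly_outcome_fee(100_000, 30_000))
# {'savings': 70000.0, 'fee': 14000.0, 'net_benefit': 56000.0}

# No savings in a month means no fee at all.
print(monthly_outcome_fee(100_000, 100_000))
# {'savings': 0.0, 'fee': 0.0, 'net_benefit': 0.0}
```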

Granica’s incredible breakthroughs help dramatically cut cloud costs. The company’s AI-enabled, byte-granular, inline data reduction of large-scale sensor, image, and textual AI data losslessly reduces AI-related storage costs by up to 80%, running transparently in the background. [3] Granica also provides privacy preservation for PII and other sensitive data, keeping data safe while accelerating downstream AI workflows. Granica’s efficiency services are easily consumed as an API by developers building applications using data (especially training data) from object stores. Granica offers enterprises the simplest and most secure way to cut the cost of hot AI data without resorting to archival or deletion, with an insertion point strategically placed inline at the start of the AI pipeline for easy integration.
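As a rough mental model of what “inline and transparent” means here, the toy Python sketch below reduces objects on write and restores them on read, so downstream readers always see the original bytes. It uses stock zlib and an in-memory dict purely for illustration; it is not Granica’s algorithm, API, or architecture, and ReducedObjectStore is a hypothetical name.

```python
# Toy illustration of inline, lossless data reduction in front of an object
# store: reduce on write, restore on read; downstream code sees original bytes.
# zlib and the in-memory dict are stand-ins, not Granica's implementation.
import zlib

class ReducedObjectStore:
    def __init__(self):
        self._objects: dict[str, bytes] = {}   # stand-in for an S3/GCS bucket

    def put(self, key: str, data: bytes) -> float:
        """Store losslessly reduced bytes; return stored size as a fraction of original."""
        reduced = zlib.compress(data, 9)
        self._objects[key] = reduced
        return len(reduced) / len(data)

    def get(self, key: str) -> bytes:
        """Readers always get the exact original bytes back."""
        return zlib.decompress(self._objects[key])

store = ReducedObjectStore()
original = b'{"frame": 1, "lidar": [0.0, 0.0, 0.1]}\n' * 50_000  # highly redundant sample
ratio = store.put("training/frames.jsonl", original)
assert store.get("training/frames.jsonl") == original            # lossless round trip
print(f"stored at {ratio:.1%} of original size")
```

The real challenge, of course, is doing this byte-granularly at petabyte scale without adding latency to the hot path, which is where Granica’s research-driven approach comes in.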

Granica’s strategy of fusing fundamental, novel research in data-centric AI with large-scale systems engineering to bring new products to market is remarkable. Its research team – led by chief scientist Andrea Montanari, a pioneering expert in information theory and a professor at Stanford University – is already making groundbreaking advances at a rapid pace.

When we first met Rahul Ponnala and Tarang Vaish – seasoned data experts and former engineers at Pure Storage and Cohesity – we saw the same massive ambition, tenacity, and ability to execute that we have seen over prior decades from generational founders like Ali Ghodsi, Matei Zaharia, Jeremy Achin, Kai Li, and Matthew Prince. Rahul & Tarang articulate Granica’s massive strategic vision with crystal clarity, for both the near term and the distant future. They want to build an iconic company the right way and are customer-obsessed. They care deeply about company culture and winning fairly. They are an absolute joy to work with.

With much more in store in coming years, Granica launches today with its flagship Crunch data reduction service and its first add-on, Granica Screen, for data privacy and security. We believe Granica is particularly well suited for AI-centric and data-centric industries such as geospatial intelligence, autonomous vehicles, robotics, retail, and ecommerce. Category-defining companies including HERE Technologies, Quantum Metric, and Nylas use Granica to cost-effectively keep, grow, and use their AI data to maximize innovation and business results from their AI initiatives.

We at NEA couldn’t be more excited to partner with Rahul, Tarang, and the entire Granica team. Visit granica.ai to learn more.

Notes and Sources


  1. From McKinsey’s “The state of AI in 2022—and a half decade in review” from December 2022. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2022-and-a-half-decade-in-review

  2. From IDC’s “The Era of FinOps: Focus is Shifting from Cloud Features to Cloud Value” from February 2023. https://blogs.idc.com/2023/02/22/the-era-of-finops-focus-is-shifting-from-cloud-features-to-cloud-value/#:~:text=If%20speed%20of%20adoption%20and,industry%20sectors%20and%20countries%20today.

  3. Based on Granica customer observations

About the Authors

Pete Sonsini

Pete is a Venture Advisor at NEA, focused on early-stage software investing. He was a GP and head of the firm’s Enterprise investing practice for more than a decade. Pete led NEA’s early investment in several companies that became unicorns, including Databricks, Upstart, Anyscale, and Instabase. Before his venture career, he held sales and product management jobs at HP and was an early employee at VMware, where he built the OEM channel from the ground up. He has a BA from UC Berkeley and an MBA from Kellogg, and was a varsity letterman and national collegiate champion in rugby while at Cal. He currently sits on the board of Cal Rugby.

Vanessa Larco

Vanessa joined NEA as a Partner in 2016 and focuses on enterprise and consumer investing. She has led investments in Assembled, Kindred, Rewind AI, Cleo, Evernow, Rocket.Chat, and Mejuri, among others. She is also a board observer at Forethought, SafeBase, Orby AI, Granica, Modyfi, and HEAVY.AI. She was a board observer at Robinhood until its IPO in 2021. Prior to venture, she led product teams at Box, Twilio, Disney, and Xbox.

Mustafa Neemuchwala

Mustafa joined NEA’s Technology team in 2021. His investment interests across stages include AI, cybersecurity, developer, data, technically differentiated application software, and fintech. Before NEA, Mustafa advised on tech M&A deals at Qatalyst Partners across developer, cybersecurity, data, infra, fintech, deep tech, and consumer internet. After early years in Tokyo and Mumbai, Mustafa moved to the Dallas suburbs and is a proud Texan. He graduated from the University of Texas at Austin, studying fundamental and quantitative finance, liberal arts, computer science, and mathematics.