With AI workloads demanding distributed compute and low-latency access, Tigris offers an alternative to AWS, Azure, and Google Cloud by building decentralized, AI-native storage infrastructure.
The AI Boom Exposes Cloud Storage Gaps
The surge in generative AI startups has dramatically increased demand for distributed computing power. Companies like CoreWeave, Lambda Labs, and Together AI have risen to meet that demand, but storage infrastructure hasn’t kept pace.
- Traditional providers like AWS, Google Cloud, and Azure built storage systems optimized for centralized compute, not the decentralized, high-speed needs of AI models.
- This limitation has created latency issues and cost inefficiencies, especially for startups managing billions of small files and real-time inference.
Meet Tigris: Storage That Moves with Compute
Tigris Data, co-founded by Ovais Tariq—a key architect of Uber’s storage platform—aims to solve the storage bottleneck by decentralizing the data layer.
- Its AI-native storage platform automatically replicates data to where GPUs and compute resources are located.
- It supports low-latency access, which is critical for training, inference, and agentic workloads like real-time image, video, and voice processing.
Tariq’s core thesis: “Without storage, compute is nothing.”
Tigris Secures $25M to Build a Global Network
To scale its infrastructure, Tigris recently raised a $25 million Series A led by Spark Capital, with existing investor Andreessen Horowitz participating.
- The startup already operates data centers in Virginia, Chicago, and San Jose.
- Expansion plans target Europe and Asia, beginning with London, Frankfurt, and Singapore.
Since launching in November 2021, Tigris claims to have grown revenue 8x annually.
Breaking Free from the “Cloud Tax”
A central motivation behind Tigris is avoiding egress fees—charges imposed by cloud giants when customers move or download their own data.
- Tariq calls these fees a “cloud tax,” arguing they trap companies within a single ecosystem.
- For startups like Fal.ai, a Tigris customer, these fees once made up the majority of cloud spend.
“Egress fees were just one symptom of a deeper problem: centralized storage that can’t keep up with a decentralized AI ecosystem,” said Tariq.
Built for the Latency-Critical AI Era
Generative AI workloads are highly latency-sensitive. Whether training models or serving them for inference, delays degrade both performance and user experience.
- Tigris’s localized storage model keeps data physically close to compute, minimizing lag.
- This is especially vital in use cases like audio-based AI agents or real-time image generation, where milliseconds matter.
“Tigris lets us scale across clouds with a unified file system and zero egress fees,” said Batuhan Taskaya, head of engineering at Fal.ai.
Data Sovereignty and Control
Beyond performance and cost, Tigris taps into a growing need for data ownership.
- Enterprises in sectors like finance and healthcare must store data compliantly, often in specific jurisdictions.
- As Tariq notes, firms are increasingly cautious about handing data over to cloud providers, citing cases like Salesforce restricting competitors’ access to Slack data.
“They want to be more in control. They don’t want someone else to be in control of it,” he said.
A New Frontier for AI Infrastructure
Tigris is betting that AI’s future lies in decentralization—not just in compute, but in storage as well. Its goal is to be the backbone for the new generation of AI-native workloads that traditional cloud providers weren’t designed to serve.
- Its unique selling point: low-latency, egress-free, distributed storage built for modern AI.
- With backing from top-tier VCs and an expanding global footprint, Tigris aims to be the anti-Big Cloud—and it’s gaining ground fast.