Accelerating our voyage across the cloud data lake

Our Series B funding, new Board member, and new Upsolver Community Edition.

They say good things come in threes. Well, today we announced three things critical to the continued success of the company and the customers we serve:

  • Our Series B funding
  • Our new board member, Ariel Tseitlin
  • Our “free forever” Upsolver Community Edition

First, we announced that we have raised $25M in a Series B funding round. The round rewards our growth to date (we tripled revenue last year) and gives us the opportunity to accelerate development of our platform and get the word out to those who have yet to hear about Upsolver. The top-tier venture investors from our Series A all participated, and the round was led by a new investor, Scale Venture Partners. Scale is known for early investments in well-known SaaS success stories such as Box, DocuSign and RingCentral.

Second, Ariel Tseitlin, a partner at Scale, has joined Upsolver’s board of directors. Ariel brings a deep background in databases from his time at Oracle, and he understands scaling SaaS businesses, having been responsible for Cloud Solutions at Netflix.

Third, we have launched our Upsolver Community Edition, which will help us put the power of our no-code data lake engineering platform into the hands of more data practitioners. Anyone who knows SQL can use the visual UI in Upsolver Community to build, test and run data lake engineering jobs free of charge. They can explore and understand the product in our sandbox, or integrate it with their cloud account to run jobs on their own data. 

Upsolver has come a long way over the past year, despite the turmoil caused by the pandemic. We tripled our revenue, now manage hundreds of workloads in production, and are well established in the AWS ecosystem, including a privileged position as the only named partner for the AWS Athena query engine. We also offer Upsolver on the Azure platform and marketplace.

The story of Upsolver is really the redemption story of the data lake. The data lake emerged to handle big data analytics affordably. But it was not a complete answer: it provided a very affordable raw data store, yet required enormous amounts of work to make the data in the lake analytics-ready.

The problem with the data lake is that it’s only half a database. You’re replacing a single product like Oracle with an architecture. As much as big data mavens like to mock Oracle as a dinosaur, Oracle automated an awful lot of what we now call “data engineering.” With the data lake, that work is now the customer’s job, which has created an entire cottage industry of services, software tools and hand-coding as companies try to stitch raw data into analytics-ready data sets. In turn, this has made extracting value from raw cloud data complex, time-consuming and expensive.

Upsolver’s claim to fame will be to bridge the gap between the cloud data lake and what you’d expect from a traditional database, removing the current tradeoff between usability and affordability. In practice, that means no more hand-coding, SQL as the lingua franca for the data lake, and no need to understand the internal workings of distributed systems.

In short, our product goal is to make the cloud data lake as easy to use as a database.

Doing this opens up a world of possibilities. When you need to analyze data landing in your cloud data lake you can prepare and present it to the analytics engine of your choice, which could be a standalone query engine like AWS Athena or Presto, or a complete data store such as Redshift, Snowflake or Elasticsearch.  You can make these decisions based on economics and performance tradeoffs for the use case. 

Which brings us to the topic of openness. A great promise of the data lake was that it would act as an open system that would interconnect various and changing upstream sources and downstream analytics systems.  Upsolver will ensure that promise is delivered so that customers don’t find they have replaced the Oracle monolith with another one in the cloud. 

So what’s in it for our customers?  

Lower cost: Customers save money by executing analytics projects on low-cost cloud compute and storage with near-zero data engineering overhead.

Agile analytics: Analytics projects get delivered faster and with higher predictability, are more reliable in operation, and are easier to update. We find that customers can deliver in days what would have taken months or quarters of hand-coding and tuning with Spark.

More users empowered via self-service: Upsolver opens up building data lake analytics to a wide range of SQL-savvy data practitioners beyond data engineers, including data scientists and analysts plus business users such as product management and operational managers.

Best-of-breed analytics tooling: It is extremely unlikely that one vendor will deliver the best product for ad hoc analytics, log analytics, search analytics, machine learning and myriad other analytics types.  By sourcing data in the lake, Upsolver frees customers to make best-of-breed choices based on use case, and to mix, match and switch analytics tools as needed. 

Act on real-time data: Since Upsolver delivers data continuously at scale, customers can use their data lake for workloads that rely on real-time and streaming data, maintaining data freshness and exactly-once delivery.

As we embark on the next phase of our journey, we are grateful to our investors, customers and partners, and intend to make them proud.

