Today we’ve got some great news for organizations that have multiple users working on Upsolver, or anyone who likes to fiddle with the system and make frequent changes to data sources, output streams, aggregations or other features. Thanks to Upsolver’s new built-in Git integration, you can have multiple users making changes at once, safe in the knowledge that all your work is securely stored and easily recoverable in your Git repository.
As the end of the year rapidly approaches, it’s time to take a look at what the next one might have in store.
We’re closing the year strong with some great new features that can help improve the breadth and versatility of your work with Upsolver. Highlights include:
More often than not, big data is made up of a lot of small files. Event-based streams from IoT devices, servers or applications typically arrive as KB-scale JSON files, easily adding up to hundreds of thousands of new files being ingested into your data lake every day.
Writing small files to object storage (Amazon S3, Azure Blob Storage, HDFS, etc.) is easy enough; however, trying to query the data in this state using a SQL engine such as Athena or Presto will kill both your performance and your budget, since every tiny file adds per-request and scheduling overhead to each scan.
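To make the problem concrete, here is a minimal sketch of the kind of compaction that keeps query engines happy: merging many small, newline-delimited JSON files into one larger columnar Parquet file. The paths, batch layout and use of PyArrow are illustrative assumptions for this example, not a description of how Upsolver handles compaction internally.

```python
# Hypothetical compaction sketch: merge many KB-scale JSON files into one
# Parquet file so engines like Athena or Presto scan fewer, larger objects.
# Paths are illustrative; assumes all files share the same schema and are
# newline-delimited JSON.
import glob

import pyarrow as pa
import pyarrow.json as pj
import pyarrow.parquet as pq

small_files = glob.glob("landing/events/*.json")

# Read each small file into an Arrow table, then concatenate them.
tables = [pj.read_json(path) for path in small_files]
combined = pa.concat_tables(tables)

# Write one compressed, columnar file that is far cheaper to query.
pq.write_table(combined, "lake/events/compacted.parquet", compression="snappy")
```

In practice this kind of compaction has to run continuously as new files land, which is exactly the operational burden a managed platform aims to take off your hands.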
As a company founded by data professionals, we treat data security as a top concern. Upsolver offers a comprehensive set of protections for all sensitive data that is processed or managed on the Upsolver platform. Upsolver uses a cloud-native architecture to keep customers’ data safe in their own AWS account, while adding further layers of security to prevent any unauthorized access to data through the Upsolver front-end UI.
So you’ve decided it’s time to overhaul your data architecture. What’s next? How do you go about building a data lake that delivers the results you’re expecting?
Well, we’re strong believers in the notion that an example is worth a thousand convoluted explanations. That’s why this post is all about real-life examples of companies that have built their data lakes on Amazon S3. Use it for inspiration, reference or as your gateway to learn more about the different components you’ll need to become familiar with for your own initiative.
Apache Kafka is a cornerstone of many streaming data projects. However, it is only the first step in the potentially long and arduous process of transforming streams into workable, structured data. How should you design the rest of your data architecture to build a scalable, cost-effective solution for working with Kafka data? Let’s look at two approaches, reading directly from Kafka versus creating a data lake, and understand when and how you should use each.
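For reference, here is a minimal sketch of the “read directly from Kafka” approach using the open-source kafka-python client; the topic name, broker address and batch size are illustrative assumptions. In a data-lake design, the batch-handling step would instead write each batch to object storage as columnar files for a SQL engine to query.

```python
# Hypothetical consumer reading events straight from Kafka (kafka-python).
# Topic, brokers and batch size are illustrative assumptions.
import json

from kafka import KafkaConsumer


def handle(records):
    # Placeholder for downstream logic; in a data-lake design this batch
    # would be written to S3 as Parquet instead of processed in place.
    print(f"handled {len(records)} events")


consumer = KafkaConsumer(
    "user-events",
    bootstrap_servers=["localhost:9092"],
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 1000:
        handle(batch)
        batch.clear()
```

The trade-off the post explores is that this direct approach keeps latency low but pushes storage, retention and schema management onto Kafka itself, whereas landing the same events in a data lake makes them cheaper to retain and easier to query with SQL engines.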
The following article is an abridged version of our new guide to Data Lakes and Data Lake Platforms - get the full version for free here.
If you’re working with data in any capacity, you should be familiar with Data Lakes. Even if you don’t need one today, the rapid growth of data and demand for increasingly versatile analytic use cases (such as reporting, machine learning, and predictive analytics) could result in your organization outgrowing its data infrastructure much sooner than you currently foresee.
If you only read the bombastic headlines, you might be forgiven for thinking that Big Data is the name of a real-life superhero: fighting crime, busting traffic jams and even curing diseases. But when you work with data for a living, you quickly find out that underneath the shiny facade, ‘doing big data’ is also a major pain.
Mentioning the words “migration” or “database refactoring” to a typical DBA is unlikely to help you make new friends. Most organizations are extremely averse to changing their data infrastructure, which is often assumed to be a long, arduous and expensive process. It doesn’t have to be this way: with the right tools you can build a scalable big data infrastructure on AWS in just a week or two. Still, change is always scary.