Stream processing is a critical part of the big data stack in data-intensive organizations. Tools like Apache Storm and Samza have been around for years, and are joined by newcomers like Apache Flink and managed services like Amazon Kinesis Streams.
Is it time to move your data lake to the cloud? As with any infrastructure choice, there are advantages and trade-offs to deploying in the cloud vs on-premises, and the decision needs to be made case by case, based on considerations such as scale, cost, and available technical resources.
This article covers best practices for reducing the price tag of Elasticsearch using a data lake approach. Want to optimize your entire streaming data infrastructure? Check out our technical whitepaper to see how leading organizations generate value from cloud data lakes.
Elasticsearch is a fantastic log analysis and search tool, used by everyone from tiny startups to the largest enterprises. It’s a robust solution for many operational use cases as well as for BI and reporting, and it performs well at virtually any scale, which is why many developers get used to ‘dumping’ all of their log data into Elasticsearch and storing it there indefinitely.
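To make the alternative to indefinite retention concrete, here is a minimal sketch of an index lifecycle management (ILM) policy that rolls over hot log indices and deletes them after 30 days. The cluster URL, policy name, and thresholds are illustrative assumptions, not values taken from this article:

```python
# Hypothetical sketch: applying an ILM policy so log indices are not kept forever.
# ES_URL, the policy name, and the thresholds are assumptions for illustration.
import requests

ES_URL = "http://localhost:9200"  # assumed local Elasticsearch cluster

policy = {
    "policy": {
        "phases": {
            # Roll the write index over once it grows large or is a day old
            "hot": {"actions": {"rollover": {"max_size": "50gb", "max_age": "1d"}}},
            # Drop indices 30 days after rollover instead of storing them indefinitely
            "delete": {"min_age": "30d", "actions": {"delete": {}}},
        }
    }
}

resp = requests.put(f"{ES_URL}/_ilm/policy/logs-retention", json=policy)
resp.raise_for_status()
print(resp.json())  # {'acknowledged': True} on success
```

The data lake approach goes a step further: because the raw logs also land in cheap object storage, the Elasticsearch retention window can be short without losing any history.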
Streaming data is becoming a core component of enterprise data architecture. Streaming technologies are not new, but they have matured considerably over the past year. The industry is moving away from the painstaking integration of technologies like Kafka and Storm and toward full-stack solutions that provide an end-to-end streaming data architecture.
Today we’ve got some great news for organizations that have multiple users working on Upsolver, or for anyone who likes to experiment with the system and make frequent changes to data sources, output streams, aggregations, or other features. Thanks to Upsolver’s new built-in Git integration, you can have multiple users fiddling away, safe in the knowledge that all your work will be securely stored and easily recoverable in your Git repository.
As the end of the year rapidly approaches, it’s time to take a look at what the next one might have in store.
We’re closing the year strong with some great new features that expand the breadth and versatility of what you can do with Upsolver. Highlights include:
More often than not, big data is made up of a lot of small files. Event-based streams from IoT devices, servers, or applications typically arrive as kilobyte-scale JSON files, easily adding up to hundreds of thousands of new files being ingested into your data lake every day.
Writing small files to object storage (Amazon S3, Azure Blob Storage) or a distributed file system such as HDFS is easy enough; however, trying to query the data in this state using an SQL engine such as Athena or Presto will absolutely kill both your performance and your budget.
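The standard fix is compaction: merging many small files into fewer, larger, columnar ones. Here is a minimal sketch using pyarrow; it assumes newline-delimited JSON files that share a schema, and the local paths ("events/", "compacted/") are hypothetical stand-ins for an S3 prefix:

```python
# Minimal compaction sketch, assuming pyarrow and newline-delimited JSON files
# that share a schema. Paths are hypothetical stand-ins for an S3 prefix.
from pathlib import Path

import pyarrow as pa
import pyarrow.json as pj
import pyarrow.parquet as pq

small_files = sorted(Path("events").glob("*.json"))  # thousands of kb-scale files
tables = [pj.read_json(str(path)) for path in small_files]

# Merge into one table and write a single large, compressed, columnar file:
# engines like Athena and Presto then scan far fewer objects and far fewer bytes.
combined = pa.concat_tables(tables)
Path("compacted").mkdir(exist_ok=True)
pq.write_table(combined, "compacted/events.parquet", compression="snappy")
```

In production this merge has to run continuously as new files land, which is exactly the kind of ongoing compaction work a data lake platform automates.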
As a company founded by data professionals, we consider data security to be of the utmost concern. Upsolver offers a comprehensive set of protections to provide the highest level of security for all sensitive data processed or managed using the Upsolver platform. Upsolver uses a cloud-native architecture to keep customers’ data safe in their own AWS account, while offering additional layers of security to prevent any unauthorized access to data through the Upsolver front-end UI.
So you’ve decided it’s time to overhaul your data architecture. What’s next? How do you go about building a data lake that delivers the results you’re expecting?
Well, we’re strong believers in the notion that an example is worth a thousand convoluted explanations. That's why this post is all about real-life examples of companies that have built their data lakes on Amazon S3. Use it for inspiration, for reference, or as a gateway to learning more about the different components you'll need to become familiar with for your own initiative.