Element451 is a marketing automation and CRM platform built specifically to help colleges and universities of all sizes reach their enrollment goals. The company’s product covers everything from admissions marketing to enrollment management to student engagement, serving scores of leading institutions.
As with so many industries, higher education is undergoing rapid change, driven to a large degree by the expectations of students. Element451 builds a platform that modernizes the way colleges engage with students. In the process, the company must deliver accurate analytics-ready data quickly.
But Element451’s original home-grown data architecture, centered on MongoDB, couldn’t scale. Data transformations and calculations were manually coded. Due to MongoDB’s performance limitations, the small engineering team had to send data to Amazon Redshift for windowed aggregations, then use AWS Lambda processes to return the aggregated data to MongoDB. This process was time-intensive, implementing adjustments and additions was arduous, and performance was sub-par.
It was critical for Element451 to reduce the amount of manual ETL its team was doing, and to move from batch to real-time processing.
“We have years’ worth of data, and we’re storing everything on the user object,” notes Ardis Kadiu, CEO of Element451. “For any data field that changes on a user, you generate a change stream. But MongoDB change streams are very error-prone, and if you have a lot of data, it takes a really long time to go back in history and process any new outputs or any new materialized views.”
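To make the mechanics concrete, here is a minimal sketch of consuming a MongoDB change-stream event for a user document. The event shape follows MongoDB's documented change event format; the sample values and the helper function are illustrative, not Element451's actual code.

```python
# Sketch: extracting the changed fields from a MongoDB change-stream event.
# Every field change on a user document produces an event like sample_event;
# the values below are invented for illustration.

def changed_fields(event: dict) -> dict:
    """Return the fields modified by an 'update' change event."""
    if event.get("operationType") != "update":
        return {}
    return event.get("updateDescription", {}).get("updatedFields", {})

# Example change event, shaped like MongoDB's change event format.
sample_event = {
    "operationType": "update",
    "documentKey": {"_id": "user-123"},
    "updateDescription": {
        "updatedFields": {"email_opens": 14, "last_campaign": "spring-open-house"},
        "removedFields": [],
    },
}

print(changed_fields(sample_event))
```

Replaying years of such events to rebuild a materialized view means running this extraction over every historical change, which is why backfills were so slow.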
Element451 needed to modernize its data architecture to evolve with the higher education industry.
Kadiu knew early on that their customers needed reliable services they could leverage for real-time analytics based on responses to their student outreach campaigns. There are multiple possible touchpoints across the student lifecycle, and he needed more clarity and real-time insight into each student’s journey.
This was why he began looking at streaming solutions that could power the analytics they would deliver to customers, without requiring him to add headcount. Several requirements stood out.
“With a small team, it was critical that we weren’t reliant on a select few who were maintaining a specific code base and could become a bottleneck,” says Kadiu. “We needed something where someone else could jump in.” In addition, the company needed to avoid hard-coding calculations into the database, instead using global variables for operational flexibility when requirements change.
Finally, higher ed is a highly regulated sector, and Kadiu needed to keep control of all their data rather than have it leave their data lake for processing in a third-party service. The fact that Upsolver would run entirely within their AWS VPC helped put Upsolver at the head of its class.
“I knew that Upsolver’s data streaming approach was going to be key for us,” recalls Kadiu. “Having that history and being able to do the windowing and then seeing the data changing in real-time. I wanted our architecture to move in that direction.”
Upsolver gave the team the ability to efficiently manage aggregations and calculated fields, and to test changes and enhancements very quickly – entirely through Upsolver’s visual IDE.
Despite its small size, Kadiu’s data engineering team implemented Upsolver without outside assistance or hired consultants.
Upsolver ingests MongoDB change streams sent through Amazon Kinesis. Then Upsolver processes critical user analytic events (when actions were taken, how many times an action was taken, the source of the action, and so on) into calculated traits and pushes them back into the CRM. API-fed queries on Upsolver Lookup Tables make the data available to Element451’s analytics customers, essentially enabling Element451 to productize its analytics function.
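The kind of calculated traits described above – how many times an action was taken, from which sources, and when it last happened – can be sketched as a simple fold over a user's event stream. This is an illustrative sketch only, with invented field names; Element451 implements the equivalent logic in Upsolver rather than in application code.

```python
# Sketch: folding a user's analytic events into calculated traits
# (action counts, distinct sources, most recent event timestamp).
# Event fields (action, source, ts) are invented for illustration.

from collections import Counter

def calculate_traits(events: list[dict]) -> dict:
    """Derive per-user traits from a list of {action, source, ts} events."""
    return {
        "action_counts": Counter(e["action"] for e in events),
        "sources": sorted({e["source"] for e in events}),
        # ISO-8601 UTC timestamps compare correctly as strings.
        "last_event_ts": max((e["ts"] for e in events), default=None),
    }

events = [
    {"action": "email_open", "source": "campaign", "ts": "2023-03-01T10:00:00Z"},
    {"action": "email_open", "source": "campaign", "ts": "2023-03-02T09:30:00Z"},
    {"action": "form_submit", "source": "website", "ts": "2023-03-02T09:35:00Z"},
]
print(calculate_traits(events)["action_counts"]["email_open"])  # 2
```

In the production pipeline, traits like these are computed continuously from the Kinesis stream and pushed back into the CRM, rather than recomputed batch-style from history.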
Soon after rolling out Upsolver to deliver calculated fields and traits, the team found other opportunities both to offload existing processes from legacy systems to the data lake and to develop new analytics that could be productized. Today, everything from identity stitching (matching records that belong to the same user across identifiers) to data deduplication to aggregated campaign-level analytics is performed through Upsolver, stored in the data lake, and made accessible via Upsolver Lookup Tables. Says Kadiu, “Upsolver is so much more than a pipeline for us. It’s the compute engine that powers our data lake.”
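Identity stitching, in essence, groups records that share any identifier into a single user. The following is a hedged sketch of that idea using a union-find structure; the record fields (email, phone, student_id) are invented, and Element451 performs the real stitching inside Upsolver.

```python
# Sketch: identity stitching with union-find. Records sharing any
# identifier value are merged into one group (one user identity).
# Field names are illustrative only.

def stitch(records: list[dict]) -> list[set[int]]:
    parent = list(range(len(records)))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i: int, j: int) -> None:
        parent[find(i)] = find(j)

    seen: dict = {}  # (field, value) -> first record index that used it
    for i, rec in enumerate(records):
        for key in ("email", "phone", "student_id"):
            val = rec.get(key)
            if val is None:
                continue
            if (key, val) in seen:
                union(i, seen[(key, val)])
            else:
                seen[(key, val)] = i

    groups: dict = {}
    for i in range(len(records)):
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())

records = [
    {"email": "a@x.edu", "phone": "555-0100"},
    {"email": "b@x.edu"},
    {"phone": "555-0100", "student_id": "S1"},
]
# Records 0 and 2 share a phone number, so they stitch into one identity.
print(stitch(records))
```

The same grouping step also drives deduplication: once records are stitched, duplicates within a group can be collapsed into one canonical user.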
In addition, Kadiu’s team can now ingest and process data from new SMS and MMS message services, merge it with customer data, and display the processed data in internal dashboards the Finance team uses to monitor customer usage. The dashboards serve double duty, acting as an internal proving ground before the company exposes data to customers – for example, letting schools see their usage charges in near real-time.
This has enabled the team not only to consolidate complicated processes and eliminate technical debt associated with their custom-coded MongoDB pipelines, but also to open up a whole new avenue for easily adding functionality and creating new services they can then productize and deliver to the schools that rely on this data. Now the Element451 engineering team focuses on its core business – not manual coding and pipeline maintenance.
“We’re using the Upsolver managed data lake as a core function of our product,” says Kadiu. “I really like that approach, because we can vet out the technology, and then our customers get all that visibility. And then that opens up our margins in the long run because it becomes self-service.”