
ETL Data Pipeline & Machine Learning (ML)

Even if you’re not a technology company—or even if you are but aren’t specifically focused on artificial intelligence—you’ve likely considered how your company can use AI to make your operations more efficient and profitable.

The beauty of AI is that it works in so many ways and is applicable across nearly every industry and vertical. Taking advantage of it now may help you stay ahead of the competition. Waiting to integrate it into your business means you could fall behind. 

The most critical aspect of getting AI ready and effectively working for your organization is ensuring you have the right data foundation to build effective AI models and tools. If you don’t have clean, usable data upstream, then your AI tools won’t be nearly as effective. This means creating an extract, transform, and load (ETL) pipeline that will clean and normalize your data so you can build machine learning (ML) algorithms on top of it. 

AI needs clean and effective data to work. ETL data pipelines ensure your data is available, cleaned, and transformed into a usable form for your organization. Because ETL is such a critical piece of enabling an organization to build on its data, why not consider integrating AI and ML into the ETL process itself?

Here's a look at what ETL is, why it's so important, and why a process that sounds relatively simple can be complex to manage.

What is ETL?

The extract, transform, and load process is part of upstream data management. It typically occurs early in your data workflows and sets the stage for data to be used effectively. ETL has three phases or stages: extract, transform, and load.

  • Extract: This stage pulls data from its sources and gets it ready to be transformed.
  • Transform: This stage maps the data into a format that allows it to integrate more easily with analytics platforms and other data sets. It also cleans the data by removing outliers, flagging incomplete records, and discarding empty values.
  • Load: This final stage delivers the data into a system or tool where it can be analyzed and acted on.
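
At its simplest, the three stages can be sketched in a few lines of Python. This is a minimal, illustrative example, not any specific tool's implementation; the file, column, and table names are hypothetical:

```python
import csv
import sqlite3

def extract(path):
    # Extract: read raw rows from a source export (here, a CSV file)
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Transform: clean the rows and normalize them into one shared schema
    cleaned = []
    for row in rows:
        if not row.get("patient_id"):  # drop incomplete records
            continue
        cleaned.append({
            "patient_id": row["patient_id"].strip(),
            "weight_kg": float(row["weight_lbs"]) * 0.453592,  # normalize units
        })
    return cleaned

def load(rows, db_path="warehouse.db"):
    # Load: deliver the normalized rows to a system where they can be analyzed
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS visits (patient_id TEXT, weight_kg REAL)")
    con.executemany("INSERT INTO visits VALUES (:patient_id, :weight_kg)", rows)
    con.commit()
    con.close()

load(transform(extract("ehr_export.csv")))
```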

Every piece of data created is unique. Often, the tools and processes that create data do so in a way that won’t combine well with other kinds of data. For example, let’s look at healthcare data. It sounds fairly straightforward to integrate a patient’s healthcare data into one place to track a diagnosis and the cost of treatment for a single patient. But this can quickly get tricky.

Many healthcare providers utilize electronic health record (EHR) software to chart patient data and keep track of patient visits. The problem is that there are multiple kinds of EHR tools, and if a patient sees multiple providers for one diagnosis (such as a primary care provider, radiologist, oncologist, surgical oncologist, plastic surgeon, and pharmacist), it’s likely that some of them use different EHR tools. While each EHR tool tracks roughly the same patient data, there will be inconsistencies that may make it difficult to combine the patient data into one place. 

For example, one EHR might track weight in pounds while another uses kilograms. The date field may use an MM/DD/YYYY format in one while another spells out the month. One might have a drop-down menu to select the diagnosable condition; another might have a notes field where the doctor enters the diagnosis and accompanying notes about the symptoms. 

Another tool might include the initial cost of treatment but won’t include the insurance-negotiated rates that come back after insurance is billed. In some settings, filling out the entire EHR for the patient doesn’t make sense, so the provider fills in only the applicable information and leaves other fields with “null” or void values. 

Manually tracking this patient across providers to get an accurate and holistic picture of symptoms, care, costs, and payments quickly becomes impossible. An ETL tool makes that job manageable; it acts as a great equalizer for your data.

Continuing this example, an ETL pipeline would begin by getting data from each EHR, payment, and insurance claim. It could connect directly to the tool and extract the data, or it could work with an exported version of the data in a .csv or Excel file. In a manual ETL process, an employee looks at what data is available and compares it to how the data will be used. They then set up a mapping process to normalize the data. This means they choose one format for dates, weight, symptoms, and whatever other data needs to be tracked and then map the data into this new format as part of the transform process. As a company becomes more familiar with data types and processes, it can set up tools to automate most or all of the ETL data processes. 
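
To make the mapping step concrete, here is a hedged sketch of the kind of per-source rules an analyst might define when normalizing the EHR differences described above (pound vs. kilogram weights, different date formats, missing fields). The source names and field names are made up for illustration:

```python
from datetime import datetime

# Hypothetical per-source rules chosen during the manual mapping step:
# every source gets converted to one canonical format (kilograms, ISO dates).
SOURCE_RULES = {
    "ehr_a": {"weight_to_kg": lambda w: float(w) * 0.453592,  # stores pounds
              "date_format": "%m/%d/%Y"},                     # 03/14/2024
    "ehr_b": {"weight_to_kg": float,                          # already kilograms
              "date_format": "%B %d, %Y"},                    # March 14, 2024
}

def normalize(record, source):
    rules = SOURCE_RULES[source]
    return {
        "patient_id": record["patient_id"],
        "weight_kg": round(rules["weight_to_kg"](record["weight"]), 1),
        "visit_date": datetime.strptime(record["visit_date"],
                                        rules["date_format"]).date().isoformat(),
        # Fields a provider left blank stay None instead of being silently dropped
        "diagnosis": record.get("diagnosis") or None,
    }

print(normalize({"patient_id": "p1", "weight": "154", "visit_date": "03/14/2024"}, "ehr_a"))
print(normalize({"patient_id": "p1", "weight": "70", "visit_date": "March 14, 2024",
                 "diagnosis": "hypertension"}, "ehr_b"))
```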

Normalizing the data means that no matter where the data comes from, it can be mapped and transformed so that data from different sources can be accurately and easily combined or compared. This normalized data is then delivered and loaded into the next phase of the data pipeline; often, this is a quality checker, data warehouse, or business intelligence tool.

Importance of ETL

ETL has a big job, and as data becomes larger and more complex, it will be difficult for ETL pipelines to keep up, especially if companies are setting up manual pipelines for every data source. At best, this significantly delays getting data integrated and ready for analysis. At worst, it's a completely unsustainable practice that will keep your organization from integrating valuable data sources.

Integrating machine learning into your ETL pipeline is a logical step for ensuring your company can both use and benefit from the massive amounts of data generated. Machine learning can help improve both the efficiency and the effectiveness of your ETL pipelines, creating processes that are scalable and more accurate for your company moving forward.

How machine learning works in an ETL pipeline

Machine learning algorithms are best utilized to complete very specific tasks. These algorithms are designed to produce specific outputs from the data they're given with little human intervention. They are particularly good at identifying patterns and at classifying and grouping data.

This works well in ETL, where data needs to be reclassified into a usable format. Once a specific type of data has been mapped, it’s easy to map data from similar sources, which is where utilizing ML could be very effective. Once you’ve trained the algorithm to map data from similar sources, ML tools could be deployed to manage new data feeds with little to no human intervention.
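
As one illustrative sketch (not a description of any particular product), a small scikit-learn model could learn from column labels that were already mapped by hand and then suggest mappings for a new, similar source. The labels and canonical fields below are hypothetical:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Column labels seen in sources that were already mapped by hand,
# paired with the canonical field each one maps to (hypothetical data).
seen_labels = ["pt_weight_lbs", "weight_kg", "body_weight",
               "visit_dt", "date_of_visit", "encounter_date",
               "dx_code", "diagnosis", "primary_dx"]
canonical   = ["weight", "weight", "weight",
               "visit_date", "visit_date", "visit_date",
               "diagnosis", "diagnosis", "diagnosis"]

# Character n-grams let the model generalize to unseen but similar labels.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(seen_labels, canonical)

# Suggest mappings for columns from a new, similar EHR export.
new_columns = ["patient_weight", "dt_of_encounter", "dx"]
for col, guess in zip(new_columns, model.predict(new_columns)):
    print(f"{col} -> {guess}")
```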

The pitfalls of ETL without ML

One of the biggest problems with ETL is human management. Data grows at an incredible pace, and companies that want to use that data often run into human-bandwidth bottlenecks when manually creating ETL pipelines for each data source. It's unsustainable to have a person managing every ETL pipeline, even with tools that automate aspects of it. Using ML to help establish new pipelines and automatically integrate similar data sources frees your team to integrate more complex data and reap the benefits of all the data you need.

Plus, people can introduce errors into the data. To have robust data analytics processes downstream, you need to be able to trust that the data coming out of your ETL pipeline is error-free. Because raw data is rarely clean and often contains errors before it reaches the ETL stage, utilizing ML tools to identify and remove inaccurate or unusable data at the beginning will increase the reliability of your data analysis downstream.

Here’s where ML can have the biggest impact on ETL:

Ingestion

As part of the extract stage, ML algorithms can identify which data is usable and what needs to be extracted to meet downstream analysis needs. Once a data feed has been established for one type of tool or data source, ML algorithms can learn from that ETL process and apply the same logic to extract data from similar sources. Once an organization has mapped data from one EHR tool, an ML algorithm can be trained to identify and extract similar data from other tools automatically, speeding up the process of getting data from multiple sources into one place.
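
Building on the column-mapping idea sketched earlier, one hedged way to frame ingestion is to let the model's confidence decide which fields get extracted automatically and which get routed to a person. Everything here (labels, threshold, column names) is illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labels from sources that were already mapped by hand;
# in practice the trained model would be saved and reloaded.
seen = {"pt_weight_lbs": "weight", "body_weight": "weight",
        "visit_dt": "visit_date", "encounter_date": "visit_date",
        "dx_code": "diagnosis", "primary_dx": "diagnosis"}
model = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                      LogisticRegression(max_iter=1000))
model.fit(list(seen), list(seen.values()))

# For a brand-new export, extract columns the model is confident about
# and flag the rest for human review.
for col in ["patient_weight", "dt_of_encounter", "billing_notes"]:
    guess = model.predict([col])[0]
    confidence = model.predict_proba([col]).max()
    if confidence >= 0.5:
        print(f"extract {col!r} as {guess!r}")
    else:
        print(f"flag {col!r} for review (best guess: {guess!r})")
```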

Normalizing

Data is messy, and it will likely come with errors, outliers, and variations. Using ML algorithms to identify and automatically reject unusable data ensures your team gets only clean data for analysis. Your ML tools can then make sure the data is properly mapped so the data you need can easily be combined or compared in a logical way.
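
As a sketch of the outlier-rejection side, an unsupervised model such as scikit-learn's IsolationForest can flag records whose values look nothing like the rest of the feed before they're mapped downstream. The numbers here are made up:

```python
from sklearn.ensemble import IsolationForest

# Hypothetical incoming records: (weight_kg, systolic_bp) for each visit.
records = [
    (71.0, 118), (68.5, 122), (80.2, 131), (74.9, 125),
    (69.3, 119), (82.1, 135), (700.0, 120),  # 700 kg is almost certainly a typo or unit error
]

detector = IsolationForest(contamination=0.1, random_state=0)
labels = detector.fit_predict(records)  # -1 marks suspected outliers

kept = [r for r, flag in zip(records, labels) if flag == 1]
rejected = [r for r, flag in zip(records, labels) if flag == -1]
print("kept:", kept)
print("flagged for review:", rejected)
```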

Data governance

Because ML algorithms are great for classifying and grouping data, you can use ML tools to help establish a strong data governance foundation as part of your data ingestion processes. Using ML tools to classify and group data based on origin and data lineage ensures your company can have high-level control over how and why data is accessed. 

Real-time processing

In the past, when people needed to be deeply involved in ETL processes, they would do something called batch processing. This is where they would run ETL on data files at a specific recurrence—sometimes daily, weekly, or monthly. While batching helped people streamline their processes, focus on one data pipeline at a time, and ensure all the usable data was collected, it also created serious lag time by delaying when normalized data was available for analysis. ML tools can help process data in real time or nearly real time. They continually process data as it comes in and ensure that downstream users are able to get the most up-to-date information available for their analysis.
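
As a loose illustration (not a specific streaming product), the shift is from accumulating files for a nightly run to transforming and loading each record as it arrives. The feed below is simulated:

```python
import time
from datetime import datetime, timezone

def incoming_records():
    # Stand-in for a live feed; in practice this would be a message queue
    # or change stream rather than a hard-coded list.
    for weight in (71.0, 68.5, 80.2):
        yield {"weight_kg": weight, "received_at": datetime.now(timezone.utc).isoformat()}
        time.sleep(0.1)

def process(record):
    # Transform and load each record as soon as it arrives, so downstream
    # dashboards see it within seconds instead of after the nightly batch.
    record["weight_lbs"] = round(record["weight_kg"] / 0.453592, 1)
    print("loaded", record)

for rec in incoming_records():
    process(rec)
```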

Benefits of machine learning in ETL

Here are some of the benefits of integrating machine learning algorithms into your ETL processes. 

  • Efficiency. ETL is highly repetitive, which makes it a good candidate for improvement with ML algorithms, particularly for mapping and normalizing data during the transformation stage. By automating routine processes within the ETL pipeline, teams can integrate more data and focus on higher-level tasks that support your upstream data management processes.
  • Data quality. People can unintentionally introduce errors into data processes. Utilizing ML models to manage those processes can help your team reduce errors and ensure higher-quality data throughout your pipeline. Reliably accurate data will help your team develop trust in the data and the ML tools and lead to better insights based on the data.
  • Scale. Every company wants processes that meet current needs and adapt easily to future ones. The data you create and act on will continue to grow exponentially, and using ML in the ETL process ensures your company can integrate new data sources and process vast amounts of data well into the future.
  • Enhanced AI downstream. Utilizing ML as part of your upstream data processes means you can create better systems to support additional AI in your downstream data management—whether you’re using real-time data to react quickly to fluctuating market trends or incorporating more complex data feeds for deeper analysis.

Experience the power of machine learning and ETL with Domo’s data integration.
