Abstract:
This thesis deals with large and heterogeneous formats of data. Big Data is commonly characterized
by the five V's: Variety, Veracity, Volume, Velocity, and Value. Our research tackles the Variety
and Value aspects of Big Data and is carried out in a Data Lake environment. Data Lakes are made
up of several components, such as Data Ingestion, Metadata, Data Governance, and Data Security.
The module we have chosen to work on is Data Ingestion. Our study's aim is to ingest massive
volumes of information from various sources into the lake. To ingest our data, we implement the
Extract, Load, Transform (ELT) process instead of Extract, Transform, Load (ETL): since we are
working in a Data Lake environment, data must be loaded as-is, with light transformations only.
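As a minimal illustration of this ELT flow (a sketch assuming a PySpark environment; the HDFS
address, paths, and the "ingested_at" column are hypothetical examples, not the platform described
in this thesis), a raw source can be extracted, loaded as-is into the lake's raw zone, and given
only a light transformation such as an ingestion timestamp:

    # Minimal ELT ingestion sketch in PySpark. The HDFS address, paths, and
    # the "ingested_at" column are hypothetical, for illustration only.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("elt-ingestion-sketch").getOrCreate()

    # Extract: read a raw CSV source; no schema is imposed up front.
    raw = spark.read.option("header", "true").csv(
        "hdfs://namenode:9000/landing/source.csv"
    )

    # Light transformation only: tag each record with its ingestion time.
    # Heavier cleaning and modelling are deferred until after the load
    # (the "T" in ELT happens later, inside the lake).
    tagged = raw.withColumn("ingested_at", F.current_timestamp())

    # Load: persist the data as-is into the lake's raw zone.
    tagged.write.mode("append").parquet("hdfs://namenode:9000/lake/raw/source")

    spark.stop()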
After exploring various data ingestion frameworks, we came across several solutions. The one that
stood out was Apache Spark. After thoroughly analyzing the framework, we found a couple of missing
elements, so we adopted Spark's framework and extended it with two features of our own: a Data
Classifier and a Data Visualizer. The new data ingestion platform was developed in the PyCharm IDE
with Apache Spark 3.0.0 and Python 3.6, under Ubuntu 20, and the Data Lake we chose is Hadoop.
Keywords:
Data Lake, Data Ingestion, ELT, Data Classifier, Data Visualizer, Big Data.