Job Requirements:
Hadoop, PySpark, Hive, Python, Apache Spark, Programming/Development, ETL Pipelines
Experience and Skills Required:
Experience with big data distributed ecosystems (Hadoop, PySpark, Hive)
Experience working with large volumes of data
Experience using Python in a data engineering context: data transformations, data wrangling, ETL, and API interaction (see the sketch after this list)
Excellent knowledge of SQL (optimizations, complex aggregations, performance tuning) and relational databases
Experience building data processing frameworks and big data pipelines.
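To give a concrete picture of the day-to-day work these requirements describe, here is a minimal PySpark ETL sketch: extract from a Hive table, transform with SQL-style aggregations, and load partitioned Parquet onto HDFS. The table name raw_db.orders, the column names, and the output path are hypothetical placeholders, not details from the posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("etl-example")
    .enableHiveSupport()  # read Hive tables on the Hadoop cluster
    .getOrCreate()
)

# Extract: read a (hypothetical) Hive table of raw order events
orders = spark.table("raw_db.orders")

# Transform: filter, derive a date column, and aggregate
daily_revenue = (
    orders
    .filter(F.col("status") == "completed")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("total_revenue"),
        F.countDistinct("customer_id").alias("unique_customers"),
    )
)

# Load: write the result back to HDFS as partitioned Parquet
(
    daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("hdfs:///warehouse/curated/daily_revenue")
)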
Qualification: Any Graduate
Batch: 2018/2019/2020/2021/2022