Data Engineer II

Job Overview

INDIA - Chennai
About LatentView:
  • LatentView Analytics is a leading global analytics and decision sciences provider, delivering solutions that help companies drive digital transformation and use data to gain a competitive advantage. With analytics solutions that provide a 360-degree view of the digital consumer, fuel machine learning capabilities, and support artificial intelligence initiatives, LatentView Analytics enables leading global brands to predict new revenue streams, anticipate product trends and popularity, improve customer retention rates, optimize investment decisions, and turn unstructured data into a valuable business asset.
  • We specialize in Predictive Modelling, Marketing Analytics, Big Data Analytics, Advanced Analytics, Web Analytics, Data Science, Data Engineering, Artificial Intelligence and Machine Learning Applications. 
  • LatentView Analytics is a trusted partner to enterprises worldwide, including more than two dozen Fortune 500 companies in the retail, CPG, financial, technology and healthcare sectors.

Job Description:
  • Azure Data Engineer with good knowledge of the ETL process in Azure architecture, responsible for designing, building, and implementing Azure ETL pipelines.
  • Work with Business Analysts to understand requirements and perform data discovery. Ensure that data is cleansed, mapped, transformed, and otherwise optimised for storage and use according to business and technical requirements.
  • Ability to perform ETL development and deploy production code (with unit testing, continuous integration, versioning, etc.).
  • 3+ years of relevant experience in Azure ETL development. 
  • Experience working with Microsoft’s modern data warehousing and analytics platforms.
  • Highly experienced in Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure Data Lake.
  • Good knowledge of Azure architecture and ETL pipelines on Azure.
  • Strong proficiency in Microsoft SQL Server development, including: (i) the ability to query and investigate data and to write views, stored procedures, and other objects using T-SQL; (ii) understanding and creating entity-relationship models.
  • Experience with Databricks, including clusters and performance optimization techniques.
  • Good knowledge of programming with Python/PySpark.
  • Ability to create reusable and scalable ADF pipelines.
  • Knowledge of Azure DevOps and version control systems, ideally Git.
  • Azure DP-203 and DP-900 certifications will be an added advantage.