Responsibilities:

  • Responsible for database modelling and design, metadata management, and data storage
  • Understand business requirements and define metrics for downstream analysis and visualization
  • Design optimal data pipeline architectures and assemble large, complex data sets that meet functional and non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

Required candidate profile:

  • Advanced SQL knowledge: experience with relational databases and query authoring, plus working familiarity with a variety of platforms (Teradata, Oracle, SQL Server)
  • Experience building and optimizing cloud data modelling architectures and data sets
  • Ability to analyze and query internal and external data and processes to answer specific business questions and identify opportunities for improvement
  • Familiarity with visualization tools such as Tableau and Power BI, and the ability to create data models that support them
  • Successful history of manipulating, processing, and extracting value from large, disconnected datasets
  • Experience supporting and working with cross-functional teams in a dynamic environment
  • Skill in Python, PySpark, Linux, and GitHub is mandatory (see the sketch after this list)
  • Experience in big data technologies (Hadoop/Hive)
  • Exposure to cloud technologies such as Azure, AWS, or GCP is an added advantage
  • Preferred Experience: 2 to 5 years
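To give candidates a concrete sense of how these skills combine, here is a minimal sketch of the kind of PySpark job the role involves: authoring a SQL metric query over raw data and writing the result for a BI tool to consume. The file, table, and column names (orders.csv, order_date, amount) and the output path are invented for illustration, not taken from this posting.

    from pyspark.sql import SparkSession

    # Hypothetical pipeline step: turn raw order records into a daily
    # revenue metric table for analysis and visualization.
    spark = SparkSession.builder.appName("daily_revenue_metrics").getOrCreate()

    # In production this could be a JDBC read from Teradata, Oracle, or
    # SQL Server, or a Hive table; a local CSV keeps the sketch self-contained.
    orders = spark.read.csv("orders.csv", header=True, inferSchema=True)
    orders.createOrReplaceTempView("orders")

    # Query authoring in SQL: one row of metrics per day.
    daily_revenue = spark.sql("""
        SELECT order_date,
               COUNT(*)    AS order_count,
               SUM(amount) AS total_revenue
        FROM orders
        GROUP BY order_date
        ORDER BY order_date
    """)

    # Write in a columnar format that BI tools such as Tableau or
    # Power BI can consume downstream.
    daily_revenue.write.mode("overwrite").parquet("metrics/daily_revenue")

    spark.stop()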

