Posted: 2 months ago
Work from Office
Full Time
Design, implement, and support an analytical data infrastructure providing ad-hoc access to large datasets and computing power.

Responsibilities:
- Interface with other technology teams to extract, transform, and load data from a wide variety of sources using SQL and AWS big data technologies.
- Create and support real-time data pipelines built on AWS technologies, including Glue, Redshift/Spectrum, Kinesis, EMR, and Athena.
- Continually research the latest big data and visualization technologies to provide new capabilities and increase efficiency.
- Work closely with team members to drive real-time model implementations for monitoring and alerting of risk systems.
- Collaborate with other tech teams to implement advanced analytics algorithms that exploit our rich datasets for statistical analysis, prediction, clustering, and machine learning.
- Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.

Qualifications:
- 9+ years of industry experience in software development, data engineering, business intelligence, data science, or a related field, with a track record of manipulating, processing, and extracting value from large datasets.
- Degree/Diploma in computer science, engineering, mathematics, or a related technical discipline.
- Experience working with AWS big data technologies (Redshift, S3, EMR, Spark, EKS, Glue, Kafka).
- Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing of large datasets.
- Demonstrated strength in data modeling, ETL development, and data warehousing.
- Experience with big data processing using Spark and PySpark.
- Knowledge of data management fundamentals and data storage principles.
- Experience with business intelligence reporting tools (Tableau, Business Objects, Cognos, Power BI, etc.).
- SQL, data warehousing, end-to-end knowledge of AWS, and CI/CD.
Preferred Qualifications:
- Experience working with distributed systems as they pertain to data storage and computing.
- Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations.
- Experience with web crawling and data extraction is a plus.
- Nice to have: Redshift, ETL tools (Talend, Informatica).

This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.
Location: Mumbai, Bengaluru, Gurgaon
Salary: INR 32.5 - 37.5 Lacs P.A.