Posted: 3 months ago
Work from Office
Full Time
Role & responsibilities

- Data Pipeline Management: Design, develop, and maintain robust data pipelines that support efficient data processing, transformation, and loading. Optimize these processes for performance and scalability.
- ETL Processes: Architect and implement Extract, Transform, Load (ETL) processes to integrate and transform raw data from multiple sources into meaningful, usable formats for analytics (a minimal sketch of this pattern follows the perks section below).
- Data Quality Assurance: Implement data quality checks and validation processes to ensure the integrity and consistency of data. Identify and resolve data anomalies and discrepancies.
- Scalability and Performance: Continuously monitor and enhance data processing systems so they meet the organization's growing needs. Optimize data architectures for speed and efficiency.
- Innovation and Improvement: Stay current with industry trends and technologies, and proactively suggest improvements to data systems and processes. Ensure that proposed changes do not disrupt existing pipelines or other technical processes, and that conflicts are avoided.
- Documentation and Compliance: Maintain comprehensive documentation of data processes, architectures, and workflows. Ensure compliance with data governance and security policies.

Preferred candidate profile

- Data Processing Tools: Proficiency in data processing tools such as PySpark and Pandas for large-scale data manipulation and analysis.
- Databricks: Knowledge of Databricks for collaborative data engineering and data processing.
- Automation and Templates: Experience with Python for scripting templates and automation scripts.
- Cloud Platforms: Experience with cloud platforms (e.g., AWS, Azure, GCP) for data storage and processing.
- Problem-Solving: Strong analytical skills with the ability to diagnose issues and develop effective solutions quickly.
- Continuous Learning: Enthusiasm for learning new technologies and staying current with industry trends to drive innovation in data engineering practices.
- Adaptability: Flexibility to adjust to changing project requirements and priorities, with the ability to handle multiple tasks and projects simultaneously.
- Team Collaboration: Ability to work collaboratively in a team environment and contribute to cross-functional projects.
- Communication: Excellent verbal and written communication skills to convey technical information to non-technical stakeholders.
- Any graduate with a computer science background.
- Mandatory certifications in Python and SQL; additional certifications in cloud technologies or data engineering are preferred. Certification in Databricks and data engineering concepts is good to have.
- 3 to 5 years of experience in data engineering, with a strong focus on data pipeline development, ETL processes, data warehousing, and templates.
- Experience working with cloud-based data systems.

Perks and benefits

- Training in Databricks and support on certifications.
- Hands-on exposure to AWS, Azure, and GCP, with support for any cloud certification.
- Opportunities to build leadership skills.
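The ETL and data quality responsibilities above follow a common pattern: extract raw data, transform it into an analytics-ready shape, validate it, then load it into a curated zone. The following is a minimal PySpark sketch of that pattern; every path, column name, and quality rule here is hypothetical and chosen purely for illustration, not taken from the posting.

```python
# Minimal, self-contained PySpark ETL sketch with data quality checks.
# All names (bucket paths, columns, thresholds) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw source data (hypothetical CSV in object storage).
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders.csv")

# Transform: cast types, normalize dates, and deduplicate on the key.
orders = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .dropDuplicates(["order_id"])
)

# Data quality checks: fail the batch if basic integrity rules are violated.
null_ids = orders.filter(F.col("order_id").isNull()).count()
negative_amounts = orders.filter(F.col("amount") < 0).count()
if null_ids > 0 or negative_amounts > 0:
    raise ValueError(
        f"Quality check failed: {null_ids} null ids, "
        f"{negative_amounts} negative amounts"
    )

# Load: write validated data to the curated zone, partitioned for query speed.
orders.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)
```

In practice, the same checks would typically be factored into reusable validation functions (or a framework such as the quality expectations tooling available in Databricks), so that each pipeline declares its rules rather than re-implementing them.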