We are excited about the launch of 7Dxperts, part of our team's ongoing commitment to driving growth and innovation in the data, analytics, ML and geospatial space. To ensure our continued growth and focus, we made the strategic decision to spin out the analytics business from zsah ltd. This move will enable us to invest more in our propositions and our staff while pushing the boundaries of what's possible in the realm of data. We firmly believe that targeted solutions designed for specific use cases hold more power than generic ones. Therefore, at the core of the business is bringing together people who care about customers, have a passion for solving problems, and have expertise in building targeted accelerators and solutions for industry-specific problems. 📌 Visit our website to get to know us better.
Not specified
INR 10.0 - 15.0 Lacs P.A.
Hybrid
Full Time
Role & responsibilities Develop interactive maps using libraries and technologies like Leaflet.js, Mapbox, Google Maps API, and OpenLayers. Implement H3 Indexing for spatial partitioning and optimization to improve data analysis and map rendering performance. Manage and optimize geospatial data querying, storage, and transformation using Snowflake and Databricks. Leverage DuckDB for efficient local geospatial querying and real-time analysis. Develop and maintain clean, scalable, and type-safe code using TypeScript for frontend and backend geospatial solutions. Build spatial queries, conduct geospatial analysis, and optimize pipelines for mapping and visualization tasks. Collaborate with data engineers and backend developers to integrate geospatial data pipelines into cloud platforms (e.g., Snowflake and Databricks). Work with GIS tools (QGIS, ArcGIS) to analyse and visualize large-scale spatial data. Integrate mapping tools with cloud platforms and automate data workflows for geospatial analytics. Stay up-to-date with the latest tools and technologies in cloud data platforms, geospatial mapping, and spatial data indexing. Preferred candidate profile Bachelors degree in computer science, Geographic Information Systems (GIS), Data Engineering, or a related field. Proficiency in mapping libraries/APIs: Google Maps, Mapbox, Leaflet, OpenLayers, or similar. Experience with H3 Index for spatial indexing, analysis, and partitioning. Strong hands-on experience with Snowflake and Databricks for managing, analysing, and processing large-scale geospatial data. Proficiency with DuckDB for real-time geospatial querying. Strong programming skills in TypeScript and modern web technologies (HTML, CSS, JavaScript). Experience working with geospatial data formats: GeoJSON, KML, Shapefiles, and GPX. Familiarity with GIS software (QGIS, ArcGIS) for spatial data analysis. Solid understanding of SQL and experience optimizing spatial queries. Ability to collaborate in a cross-functional team and integrate solutions with cloud services. Perks and benefits Training in Databricks. Support on certifications. Hands-on AWS, Azure, and GCP. Support you on any cloud certification. Build leadership skills.
Not specified
INR 30.0 - 35.0 Lacs P.A.
Work from Office
Full Time
Role & responsibilities Data Pipeline Management: Design, develop, and maintain robust data pipelines that facilitate efficient data processing, transformation, and loading. Optimize these processes for performance and scalability. ETL Processes: Architect and implement Extract, Transform, Load (ETL) processes to integrate and transform raw data from various sources into meaningful, usable formats for data analytics. Data Quality Assurance: Implement data quality checks and validation processes to ensure the integrity and consistency of data. Identify and resolve data anomalies and discrepancies. Scalability and Performance: Continuously monitor and enhance data processing systems to ensure they meet the growing needs of the organization. Optimize data architectures for speed and efficiency. Innovation and Improvement: Stay updated with the latest industry trends and technologies. Proactively suggest improvements to data systems and processes to enhance efficiency and effectiveness. And make sure that do not impact the pipeline and other technical processes and conflicts should not be occurred. Documentation and Compliance: Maintain comprehensive documentation of data processes, architectures, and workflows. Ensure compliance with data governance and security policies.Preferred candidate profile Data Processing Tools: Proficiency in data processing tools such as PySpark and Pandas for large-scale data manipulation and analysis. Databricks: Knowledge of Databricks for collaborative data engineering, data processing. Automation and Templates: Experience with Python for scripting templates and automation scripts. Cloud Platforms: Experience with cloud platforms (e.g., AWS, Azure, GCP) for data storage and processing. Problem-Solving: Strong analytical skills with the ability to diagnose issues and develop effective solutions quickly. Continuous Learning: Enthusiastic about learning new technologies and staying updated with industry trends to drive innovation in data engineering practices. Adaptability: Flexible and adaptable to changing project requirements and priorities. Capable of handling multiple tasks and projects simultaneously. Team Collaboration: Ability to work collaboratively in a team environment and contribute to cross-functional projects. Communication: Excellent verbal and written communication skills to effectively convey technical information to non-technical stakeholders. Any Graduate with Computer Science Background. Mandatory certifications in Python and SQL. Additional certifications in cloud technologies or data engineering are preferred. Good to have certification on Databricks and Data Engineering Concepts 3 to 5 years of experience in data engineering, with a strong focus on data pipeline development, ETL processes, data warehousing, and templates. Experience in working with cloud-based data systems. Perks and benefits Training in Databricks. Support on certifications. Hands-on AWS, Azure, and GCP. Support you on any cloud certification. Build leadership skills.
Not specified
INR 30.0 - 35.0 Lacs P.A.
Hybrid
Full Time
Role & responsibilities 3+ years of experience in Spark, Databricks, Hadoop, Data and ML Engineering. 3+ Years on experience in designing architectures using AWS cloud services & Databricks. Architecture, design and build Big Data Platform (Data Lake / Data Warehouse / Lake house) using Databricks services and integrating with wider AWS cloud services. Knowledge & experience in infrastructure as code and CI/CD pipeline to build and deploy data platform tech stack and solution. Hands-on spark experience in supporting and developing Data Engineering (ETL/ELT) and Machine learning (ML) solutions using Python, Spark, Scala or R languages. Distributed system fundamentals and optimising Spark distributed computing. Experience in setting up batch and streams data pipeline using Databricks DLT, jobs and streams. Understand the concepts and principles of data modelling, Database, tables and can produce, maintain, and update relevant data models across multiple subject areas. Design, build and test medium to complex or large-scale data pipelines (ETL/ELT) based on feeds from multiple systems using a range of different storage technologies and/or access methods, implement data quality validation and to create repeatable and reusable pipelines. Experience in designing metadata repositories, understanding range of metadata tools and technologies to implement metadata repositories and working with metadata. Understand the concepts of build automation, implementing automation pipelines to build, test and deploy changes to higher environments. Define and execute test cases, scripts and understand the role of testing and how it works.Preferred candidate profile Big Data technologies Databricks, Spark, Hadoop, EMR or Hortonworks. Solid hands-on experience in programming languages Python, Spark, SQL, Spark SQL, Spark Streaming, Hive and Presto Experience in different Databricks components and API like notebooks, jobs, DLT, interactive and jobs cluster, SQL warehouse, policies, secrets, dbfs, Hive Metastore, Glue Metastore, Unity Catalog and ML Flow. Knowledge and experience in AWS Lambda, VPC, S3, EC2, API Gateway, IAM users, roles & policies, Cognito, Application Load Balancer, Glue, Redshift, Spectrum, Athena and Kinesis. Experience in using source control tools like git, bit bucket or AWS code commit and automation tools like Jenkins, AWS Code build and Code deploy. Hands-on experience in terraform and Databricks API to automate infrastructure stack. Experience in implementing CI/CD pipeline and ML Ops pipeline using Git, Git actions or Jenkins. Experience in delivering project artifacts like design documents, test cases, traceability matrix and low-level design documents. Build references architectures, how-tos, and demo applications for customers. Ready to complete certifications.Perks and benefits Hands-on AWS, Azure, and GCP. Support you on any certification. Build leadership skills. Medical Insurance coverage for self and family. Provident Fund
Not specified
INR 15.0 - 27.5 Lacs P.A.
Hybrid
Full Time