Hadoop Developer

The Dignify Solutions

Objective. Collaborative. Compassionate - Trusted IT Staff Augmentation & Solutions Services Partner

About the Company

Dignify Solutions is an IT services provider with 30+ years of experience in staffing, service delivery, and talent acquisition across a wide range of industries, including banking, financial services, manufacturing, retail, energy, and more. The company focuses on delivering top-tier talent, providing strategic solutions to clients, and fostering a culture of excellence, client-centricity, and responsiveness. With a strong U.S.-based leadership team and a focus on quality, Dignify has built a reputation for delivering exceptional results in the staffing and IT services industry.

About the Role

A leading solutions provider is seeking a Senior Hadoop Developer with strong knowledge of the Hadoop ecosystem, including Cloudera, Hive, Impala, PySpark, Oozie, Kafka, and HBase. The ideal candidate has at least 10 years of experience and a customer-focused mindset, with the ability to dissect complex technical issues and coordinate platform resources effectively. The role involves working within the Hadoop platform, using BI tools, optimizing ETL processes, and leading teams to drive performance and scalability improvements.

Key Responsibilities

  • Develop, implement, and maintain applications within the Hadoop ecosystem.

  • Work closely with internal teams to resolve technical issues and provide innovative solutions.

  • Lead and organize work for other analysts, ensuring project goals are met.

  • Collaborate with stakeholders to ensure that ETL processes are scalable and performant.

  • Use tools such as Jira, Confluence, and ITSM/Remedy for project and change management.

  • Trace application and query logic through Spark and Impala logs.

  • Design and maintain large-scale data processing pipelines; a minimal PySpark sketch of this kind of pipeline follows this list.

  • Ensure optimal performance and scalability of data systems and processes.
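For illustration only (this sketch is not part of the posting), the responsibilities above often translate into PySpark jobs along the following lines, assuming a Hive-enabled Cloudera cluster. The database, table, and column names (raw_db.transactions, curated_db.daily_sales, store_id, amount, and so on) are hypothetical.

    # Minimal PySpark ETL sketch: read a raw Hive table, aggregate it,
    # and write a partitioned, Hive/Impala-queryable output table.
    # All database, table, and column names here are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("daily-sales-etl")
        .enableHiveSupport()  # needed to read and write Hive tables on the cluster
        .getOrCreate()
    )

    # Extract: raw transactions landed by an upstream feed.
    raw = spark.table("raw_db.transactions")

    # Transform: drop bad records, then aggregate to one row per store per day.
    daily = (
        raw.filter(F.col("amount") > 0)
           .groupBy("store_id", "txn_date")
           .agg(
               F.sum("amount").alias("total_sales"),
               F.countDistinct("customer_id").alias("unique_customers"),
           )
    )

    # Load: partition by date so Hive and Impala can prune partitions at query time.
    (
        daily.write
             .mode("overwrite")
             .partitionBy("txn_date")
             .format("parquet")
             .saveAsTable("curated_db.daily_sales")
    )

    spark.stop()

Partitioning the output by date is the usual design choice on this stack: Hive and Impala can then prune partitions at query time, which is where most of the performance and scalability gains the role targets tend to come from.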

Required Skills

  • 10+ years of experience with Hadoop ecosystem tools (Cloudera, Hive, Spark, YARN, Kafka, HBase).

  • Strong knowledge of PySpark and BI tools.

  • Experience with ETL processes, focusing on performance and scalability.

  • Ability to work in a fast-paced environment and manage multiple priorities.

  • Familiarity with Agile methodologies (Jira, Confluence).

  • Experience with ITSM/Remedy change management.

  • Strong problem-solving and interpersonal skills.

Desired Skills

  • Expertise in Spark, Hive, YARN, Linux, and Cloudera.

  • Proficiency in PySpark and Hadoop-based ETL.

  • Knowledge of cloud platforms and other big data technologies.
