Sr. Database Engineer (with Hadoop)

About the Company

CatchProbe is a global leader in Web Intelligence, OSINT, Threat Intelligence, and Digital Crime Analytics, providing actionable insights through its AI-driven, SaaS-based platform. Its solutions help organizations enhance intelligence gathering, prevent threats, and deliver accurate analysis through a centralized interface that integrates data from open, private, and dark web sources. The platform delivers autonomous intelligence orchestration, profiling, and prevention, ensuring a safer digital landscape for businesses worldwide.

About the Role

CatchProbe is seeking an experienced MongoDB & Big Data Developer to manage and optimize MongoDB, Elasticsearch, and Hadoop clusters. The role involves administering, maintaining, and troubleshooting large-scale data systems. The ideal candidate will be skilled in MongoDB installation, upgrades, and cluster support, as well as performance analysis and capacity planning. Experience with MongoDB tools, sharded clusters, and cloud technologies is essential, and expertise in administering MongoDB in Docker and other containerized environments is a must. Familiarity with Hadoop systems, Kafka, and cloud platforms such as AWS, Azure, or GCP is a plus.

Key Responsibilities

  • Manage MongoDB clusters, perform upgrades, and provide ongoing support.

  • Perform capacity planning, monitor performance, and optimize resource usage.

  • Collaborate with application teams for MongoDB capacity planning and new application onboarding.

  • Work with NoSQL technologies and maintain high availability and fault-tolerant systems.

  • Support sharded MongoDB clusters and handle upgrades and configuration maintenance.

  • Utilize MongoDB tools (mongodump, mongoexport, mongorestore, etc.) for database management.

  • Assist application teams in identifying and resolving MongoDB performance bottlenecks.

  • Work with cloud-based MongoDB solutions and share relevant metrics using Cloud Manager.

  • Develop automation solutions for script execution, ad-hoc report generation, and system maintenance.

  • Manage data workflows in Hadoop ecosystems (HDFS, Hive, Spark, etc.) and support data storage solutions.
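The MongoDB utilities named above (mongodump, mongorestore, etc.) are typically wrapped in automation scripts for scheduled backups. As a minimal, hedged sketch of the kind of scripting this role involves: the helper below builds a dated `mongodump` command line; the host, database name, and backup path are illustrative placeholders, not values from this posting.

```python
import datetime
import shlex

def build_mongodump_cmd(host="localhost", port=27017,
                        db="appdb", out_dir="/backups"):
    """Build a mongodump invocation for a dated, compressed backup.

    The connection details and paths are hypothetical defaults; in
    practice they would come from configuration or a secrets store.
    """
    stamp = datetime.date.today().isoformat()
    return [
        "mongodump",
        "--host", f"{host}:{port}",
        "--db", db,
        "--gzip",                          # compress the BSON/metadata output
        "--out", f"{out_dir}/{db}-{stamp}",  # one directory per daily run
    ]

# The command list can be handed to subprocess.run() on a host where
# the MongoDB database tools are installed.
cmd = build_mongodump_cmd()
print(shlex.join(cmd))
```

A restore follows the same pattern with `mongorestore --gzip --dir <backup-dir>`; keeping the command construction in one function makes it easy to reuse across cron jobs and ad-hoc runs.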

Required Skills

  • Strong experience with MongoDB, including installation, upgrades, and sharded clusters.

  • Proficiency in MongoDB shell scripting and working with tools like mongodump, mongoexport, etc.

  • Experience with cloud technologies and containerization tools like Docker.

  • Strong knowledge of NoSQL and big-data technologies, including Elasticsearch and the Hadoop ecosystem.

  • Proficient in Python and Unix Shell scripting for automation and performance optimization.

  • Experience working in public cloud environments (AWS, GCP, Azure).

  • Strong communication and collaboration skills, with the ability to work in fast-paced environments.

Preferred Skills

  • Knowledge of Hadoop ecosystems (HDFS, Yarn, Spark, Hive).

  • Familiarity with Kafka for real-time data streaming.

  • Experience with AI/ML technologies.

  • Familiarity with database performance tuning and query optimization.
