This role is based in Costa Rica, working for a major, well-recognized global sports-industry company headquartered in the USA.
Responsibilities:
- Design and build reusable components, frameworks and libraries at scale to support analytics products.
- Design and implement product features in collaboration with business and technology stakeholders.
- Identify and solve issues concerning data management to improve data quality.
- Clean, prepare and optimize data for ingestion and consumption.
- Collaborate on the implementation of new data management projects and the restructuring of the current data architecture.
- Implement automated workflows and routines using workflow scheduling tools.
- Build continuous integration, test-driven development and production deployment frameworks.
- Collaboratively review design, code, test plans and dataset implementation performed by other data engineers in support of maintaining data engineering standards.
- Analyze and profile data for designing scalable solutions.
- Troubleshoot data issues and perform root cause analysis to proactively resolve product and operational issues.
- Lead and mentor a team of data engineers.
- Troubleshoot key issues and ensure team members' performance.
Requirements:
Experience:
- Strong understanding of data structures and algorithms.
- Strong understanding of solution and technical design.
- Strong problem-solving and analytical mindset.
- Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders.
- Able to quickly pick up new programming languages, technologies, and frameworks.
- Advanced experience building scalable, real-time, high-performance cloud data lake solutions.
- In-depth understanding of microservice architecture.
- Strong understanding of developing complex data solutions.
- Experience working on end-to-end solution design.
- Able to lead others in solving complex problems by taking a broad perspective to identify innovative solutions.
- Willing to learn new skills and technologies.
- Has a passion for data solutions.
Required and Preferred Skill Sets:
- Bachelor’s degree in IT or related field.
- 8-10 years of experience as a Data Engineer.
- Advanced English level (C1).
- 3+ years of hands-on, expert-level experience with AWS (EMR [Hive, PySpark], S3, Athena) or an equivalent cloud platform.
- 3+ years of hands-on, expert-level experience with Spark batch processing, plus some familiarity with Spark Structured Streaming.
- 3-4 years of working experience with the Hadoop stack, handling huge volumes of data in a scalable fashion.
- 5+ years of hands-on experience with SQL, ETL, data transformation and analytics functions.
- 5+ years of hands-on Python experience including Batch scripting, data manipulation, distributable packages.
- 5+ years’ experience working with batch orchestration tools such as Apache Airflow or equivalent.
- 5+ years working with code versioning tools such as GitHub or Bitbucket; expert-level understanding of repo design and best practices.
- 5+ years working with deployment automation tools such as Jenkins and familiarity with containerization concepts such as Docker and Kubernetes.
- 5+ years of hands-on experience designing and building ETL pipelines; expert in data ingestion, change data capture, and data quality; hands-on experience with API development; some exposure to NiFi or Kafka.
- 5+ years designing and developing relational database objects; knowledgeable in logical and physical data modeling concepts; some experience with Snowflake.
- 3+ years of experience supporting Tableau or Cognos use cases; familiarity with their tools and capabilities.
- Familiarity with Agile; working experience preferred.
Apply: https://genpact.avature.net/su/57d1adc4bb266217
To apply for this job, email your details to maria.fernandez@genpact.com