Senior Data Engineer

at

Smaato Inc.

Hamburg, Germany
Full Time
3mo ago

Company Description

Smaato's digital ad tech platform is a completely omnichannel, self-serve monetization solution and ad server. Our controls make monetization simple. Publishers can bring their first-party data and manage all inventory in one place. Marketers get access to the highest-quality inventory so they can reach audiences around the world and on any device. Smaato is headquartered in San Francisco, with additional offices in Hamburg, New York City, Beijing and Singapore. Learn more at smaato.com.

Job Description

The Data Engineering team at Smaato works on a range of very interesting technology problems related to big data, distributed computing, low latency, analytics, visualization, machine learning, and highly scalable platform development. We build reliable, petabyte-scale distributed systems using technologies such as Spark, Hadoop, Apache Flink, Airflow, Kafka, Databricks, and Druid. As part of Smaato, you will work on the applications where all threads come together: streaming, processing, storage, and monitoring. Our ultra-efficient exchange processes more than 150 billion ad requests daily, i.e., over 4 trillion requests per month. Every line of code you write matters, as it is likely to be executed several billion times a day. We are among the biggest AWS users, with a presence in four different regions.
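As a quick back-of-the-envelope check of the volume figures above (an illustrative calculation assuming a 30-day month, not part of the posting itself):

```python
# Sanity-check the quoted request volume, assuming a 30-day month.
daily_requests = 150_000_000_000        # 150 billion ad requests per day
monthly_requests = daily_requests * 30  # ~4.5 trillion per month

# Average throughput implied by the daily figure (86,400 seconds per day).
per_second = daily_requests / 86_400

print(f"{monthly_requests:.2e}")  # 4.50e+12, i.e. "4 trillion plus"
print(f"{per_second:,.0f}")       # ~1,736,111 requests per second on average
```

At roughly 1.7 million requests per second on average (with real traffic peaking well above that), the claim that every line of code runs billions of times a day follows directly.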

Smaato’s data platform is a symphony of streams, orchestrations, microservices, schedulers, analytics, and visualization technologies. The platform is supported by polyglot persistence using Druid, DynamoDB, Vertica, and MySQL, along with orchestration and streaming frameworks such as Airflow, Spark, and Flink.

The job will involve continuous feature development, performance tuning, and platform stability assurance. The mission of our analytics team is “data-driven decisions at your fingertips”. You own and provide the system on which all business decisions are based. Precision and high-quality results are essential in this role.

Our engineers are passionate problem solvers. Be it Java, Python, Scala, Groovy, or TypeScript, we are up for any challenge.

What You’ll Do

  • Designs, implements, tests, releases, and deploys functionality without oversight
  • Manages and improves the release and deployment review process without oversight
  • Designs, defines, and contributes to everything from large components to large projects
  • Makes and communicates decisions around design, prioritization, debugging, etc.
  • Provides guidance to others on design and implementation of complex projects
  • Tracks down and resolves complex bugs and issues
  • Collaborates appropriately with Product, Engineering, or Project leaders to unblock progress and escalates impediments in a timely manner
  • Ensures the quality and maintainability of software in their domain
  • Effectively improves software life cycle (SLC) processes and CI/CD automation
  • Effectively estimates levels of effort (LOEs) for self and others, and delivers consistently against commitments

Qualifications

  • 7+ years of experience with big-data platforms or distributed systems, with a deep understanding of Spark, Hadoop, Druid, EKS, and Airflow.
  • Strong exposure to at least one programming language: Scala, Java, or Python.
  • Exposure to highly scalable, low-latency microservices development.
  • Strong exposure to application, enterprise, and microservice design patterns.
  • Strong understanding of computer science fundamentals, algorithms, and data structures.
  • Exposure to AWS, automation, and DevOps (Kubernetes, Jenkins CI/CD pipelines).
  • Proven experience owning products and driving them end to end, from requirements gathering and development through testing, deployment, and ensuring high availability post-deployment.
  • Ability to contribute to architectural and coding standards, evolving engineering best practices.
  • You enjoy operating your applications in production and strive to make on-call obsolete: debugging issues in live, production, and test clusters, implementing solutions to restore service with minimal disruption to the business, and performing root cause analysis.
  • Exposure to Databricks, R, Delta Lake, and Redash is a big plus.