Build the next generation of ScoreData’s data pipeline architecture. This role involves evaluating technologies and building and extending the data pipeline stack. You will build plugins and crawlers to ingest data from various sources, and you will process customer data to prepare it for data scientists to build models. You will own the entire lifecycle of the project, from conception through full implementation and ongoing product support.

You should be energetic, love to experiment, be passionate about data, and be focused on customer success.

  • Build the next generation of the ScoreFast data pipeline architecture
  • Work on specific customer data analytics projects and perform data analysis
  • Build plugins and crawlers to extract data from external sources
  • Operate the data pipeline systems
  • Prototype new technologies for integration
  • B.S. degree or higher in data science, computer science, statistics, mathematics, physics, engineering, or operations research from a reputable university
  • 3+ years of experience building maintainable, large-scale data pipeline architectures
  • Hands-on experience with Spark in both batch and real-time modes; experience with other real-time streaming platforms such as Kafka and Storm is a plus
  • Experience building ETL applications for big data using Hadoop in the cloud (AWS)
  • Knowledge of Hive, HBase, ZooKeeper, and Oozie is a plus
  • Knowledge of GitHub, JIRA, Confluence, Jenkins, and Docker/Kubernetes is a plus
  • Languages: Python, Java, Golang

To apply, please send your resume and a cover letter to: careers@scoredata.com


Job Location: Palo Alto
Date posted: January 16, 2018