
Sr Big Data Engineer - Data Warehouse (Closed)

Date Posted: Feb 24, 2020

Job #1570694
Permanent
Chicago, Illinois, United States
Professional

Our client, a global technology and data company that builds analytics solutions for the advertising industry, has retained us to recruit a Senior Big Data Engineer – Data Warehouse to join their growing Chicago team and help build their next-generation data processing platform.
 
If you’re excited by technology that can handle hundreds of thousands of transactions per second, collect tens of billions of events each day, and evaluate thousands of data points in real time while responding in just a few milliseconds, then this could be a great opportunity for you.
 
What you'll do:
• Lead the design, coding, and maintenance of a highly scalable, high-throughput backend data processing platform built on Big Data technologies
• Design data models for MPP columnar databases to handle a high volume of queries with sub-second response times
• Lead the entire software lifecycle, including hands-on development, code reviews, testing, continuous integration, continuous deployment, and documentation, using modern programming languages (such as Java, Scala, Python)
• Tune systems for optimal performance
• Mentor junior team members
 
You should apply if you have most of this:
• 5+ years of recent hands-on experience in one or more modern programming languages (Java, Scala, Python)
• Good understanding of collections, multi-threading, the JVM memory model, algorithms, scalability, and the various tradeoffs in a Big Data setting
• Experience developing and maintaining ETL applications and data pipelines using big data technologies
• Strong SQL knowledge (OLAP) and experience working with MPP columnar databases (Vertica, Snowflake, etc.)
• Excellent interpersonal and communication skills
• Understanding of the full software development life cycle, agile development, and continuous integration
 
What puts you over the top:
• Data warehouse experience in Snowflake, including writing ETL pipelines in Snowflake
• Experience with AWS technologies such as EMR, Step Functions, Data Pipeline, CloudFormation, etc.
• Experience with Hadoop MapReduce, Spark, Pig, Hive, etc.