1019 Data Engineer

Location: Bradenton, FL, United States
Date Posted: 04-12-2018
Our client is one of the world's leading information technology companies. Through its Global Network Delivery Model™, Innovation Network, and Solution Accelerators, the client focuses on helping global organizations address their business challenges effectively. Part of India's largest industrial conglomerate, the company employs over 130,000 of the world's best-trained IT consultants across 50 countries and is listed on the National Stock Exchange and Bombay Stock Exchange in India. Our client delivers a level of certainty to its clients and employees that no other firm can match.

Type of hire: Full Time

Job Description: 

The Data Engineer will be responsible for building and maintaining data pipelines and data products that ingest and process large volumes of structured and unstructured data from various sources. The Data Engineer will analyze data needs, migrate data into an enterprise data lake, and build data products and reports. The role requires experience building real-time and batch ETL pipelines, along with a strong understanding of big data technologies and distributed processing frameworks.
Skills Needed
  • Expertise working with large-scale distributed systems (Hadoop, Spark), including a strong understanding of big data cluster architecture.
  • Experience building and optimizing big data ETL pipelines.
  • Advanced programming skills in Python, Java, or Scala.
  • Good knowledge of Spark internals and performance tuning of Spark jobs.
  • Strong SQL skills; comfortable operating with relational data models and structures.
  • Capable of accessing data via a variety of APIs and RESTful services.
  • Experience with messaging systems like Kafka.
  • Experience with NoSQL databases (e.g., Neo4j, MongoDB).
  • Expertise with Continuous Integration/Continuous Delivery workflows and supporting applications.
  • Exposure to cloud environments and architectures (preferably Azure).
  • Ability to work collaboratively with other teams. 
  • Experience with containerization using tools such as Docker. 
  • Strong knowledge of Linux and Bash; able to interact with the OS at the command line and create shell scripts to automate workflows.
  • Advanced understanding of software development and collaboration, including experience with tools such as Git.
  • Excellent written and verbal communication skills, comfortable presenting in front of non-technical audiences.
Essential responsibilities include, but are not limited to:
  • Design and develop ETL workflows to migrate data from varied sources, including SQL Server, Netezza, and Kafka, in batch and real time.
  • Develop checks and balances to ensure the integrity of ingested data.
  • Design and develop Spark jobs as required for data processing needs.
  • Work with Analysts and Data Scientists to assist them in building scalable data products.
  • Design systems, alerts, and dashboards to monitor data products in production.
Renuka Krishnaswamy
Technical Recruiter

Desk: 408-800-4331 (PST);
Email: renuka@reqroute.com
Website: http://www.reqrouteinc.com/careers

Companies across U.S. have engaged ReqRoute, Inc to deliver skilled, dedicated IT professionals. Recruiting is our passion and we support Fortune 1000 companies with their hiring needs. We always seek to deliver competitive and sought-after career opportunities to our potential consultants and employees. We invite you to review the position requirements and apply today if your skills match our needs.  
ReqRoute, Inc is an Equal Opportunity Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, disability, military status, national origin or any other characteristic protected under federal, state, or applicable local law. (www.reqroute.com)