Big Data Developer

Location: Aurora, CO, United States
Date Posted: 10-18-2018
Job Description:

1. The candidate must have at least 6 years of experience in Scala programming and at least 4 years of experience with Spark Streaming, Spark SQL, and Kafka
2. Writing high-performance, reliable, and maintainable code in Scala
3. Good experience writing data-ingestion jobs in Scala that use Spark SQL to fetch data from structured storage and store it in Cassandra
4. Good experience handling large datasets during the ingestion process itself, using partitioning, Spark's in-memory capabilities, broadcast variables, and effective, efficient joins and transformations
5. Good experience consuming messages from Kafka using Spark Streaming
6. Good experience transforming Spark DataFrames into JSON and publishing the messages to a Kafka topic
7. Responsible for troubleshooting and resolving issues in the Spark cluster
8. Performance tuning of Spark applications, including choosing the right batch intervals and tuning memory
9. Hands-on experience with performance improvements such as partitioning and bucketing
10. Hands-on experience with Oozie installation and Spark job workflow configuration
11. Good experience with the complete end-to-end code deployment process in production
12. Good aptitude for multi-threading and concurrency concepts
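
For illustration, items 3, 5, and 6 describe a pipeline along these lines. This is a minimal sketch, not a reference implementation: it assumes the `spark-sql-kafka` and `spark-cassandra-connector` packages are on the classpath, and the broker address, topic names, keyspace, table, and event schema are all hypothetical. (Streaming writes to Cassandra via the `org.apache.spark.sql.cassandra` format require a recent connector; on older versions the same write would go through `foreachBatch`.)

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json, to_json, struct}
import org.apache.spark.sql.types.{StructType, StringType, DoubleType}

object IngestionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-cassandra-sketch")
      .getOrCreate()

    // Hypothetical schema for the incoming JSON messages.
    val schema = new StructType()
      .add("id", StringType)
      .add("amount", DoubleType)

    // Item 5: consume messages from a Kafka topic as a streaming DataFrame.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // assumed broker
      .option("subscribe", "events-in")                 // assumed topic
      .load()
      .select(from_json(col("value").cast("string"), schema).as("e"))
      .select("e.*")

    // Item 3: store the parsed records in Cassandra
    // (keyspace and table names are placeholders).
    events.writeStream
      .format("org.apache.spark.sql.cassandra")
      .option("keyspace", "analytics")
      .option("table", "events")
      .option("checkpointLocation", "/tmp/ckpt/cassandra")
      .start()

    // Item 6: re-serialize the rows to JSON and publish to another topic.
    events
      .select(to_json(struct(col("*"))).as("value"))
      .writeStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("topic", "events-out")
      .option("checkpointLocation", "/tmp/ckpt/kafka")
      .start()

    spark.streams.awaitAnyTermination()
  }
}
```

The two checkpoint locations matter in practice: each streaming sink keeps its own offsets there, which is what makes the batch operations and restarts in item 8 tunable and recoverable.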