A leading company specializing in IT services is seeking an experienced Data Engineer. This role focuses on architecting and managing AWS Big Data products and requires a strong background in data warehousing and ETL workflows. Ideal candidates will have a Bachelor's or Master's degree in computer science, along with proficiency in languages such as Java and Python. Experience in the media industry and knowledge of BI tools like MicroStrategy or Tableau are a plus.
We specialize in Staffing, Consulting, Software Development, and Training, along with IT services, for small to medium-sized companies. AG's primary objective is to help companies maximize their IT resources and meet their ever-changing IT needs and challenges.
In addition, AG offers enterprise resource planning, enterprise application integration, supply-chain management, e-commerce solutions, and B2B public-exchange and process-integration solutions. Our company provides application analysis, design, development, and programming; software engineering; systems development, testing, integration, and implementation; and management consulting services to a range of clients – including governmental agencies and private companies – throughout the United States and India.
We provide these services in multiple computing environments, using technologies such as client/server architectures, object-oriented programming languages and tools, distributed database management systems, and state-of-the-art networking and communications infrastructures. Our honest and realistic approach to recruiting means that AG does not entice or lure engineers away from their employers. We represent only high-caliber technical professionals who are committed to making a career change.
Must-Have Skills:
• 10+ years of overall experience in data warehousing and related technologies
• 3+ years architecting and managing AWS Big Data products and services such as EMR, Redshift, Data Pipeline, and Kinesis
• 3+ years of working experience with Hadoop-based technologies such as MapReduce, Hive/Pig/Impala, and NoSQL databases
• 3+ years of extensive working knowledge of programming or scripting languages such as Java, C++, PHP, Ruby, Python, and/or R, along with Linux
• Experience working with structured, semi-structured, and unstructured data sets, including social, weblog, and real-time data feeds
• Proficient in designing efficient and robust ETL/ELT workflows
• Able to tune Big Data solutions to improve performance and end-user experience
• Bachelor’s or Master’s degree in computer science or software engineering
• Knowledge of BI and visualization tools such as MicroStrategy or Tableau is a plus
• Experience in the media industry is a plus
• Must have the legal right to work in the United States