16 years Big Data Architect IT Consultant with Databricks

ZipRecruiter

Washington (District of Columbia)

On-site

USD 120,000 - 170,000

Full time

9 days ago

Job summary

A leading company is seeking a Big Data Architect IT Consultant in Washington, D.C. The role develops the architecture for big data and data lake systems, with responsibilities spanning project management and technical support. Candidates need significant expertise in cloud platforms and data analytics tools, and the position suits those looking to make an impact on the District of Columbia's Smart City initiatives.

Qualifications

  • 5+ years of experience in Big Data and Data Architecture.
  • Expertise required in cloud platforms, particularly Microsoft Azure.
  • Proven experience with data tools and analytics platforms.

Responsibilities

  • Coordinates IT project management, engineering, and risk management.
  • Develops technical applications and systems to support user needs.
  • Ensures compliance with documented design and procedures.

Skills

Big Data storage and analytics
Data Architecture best practices
Cloud platforms security and storage
Data-centric systems visualization
SQL and NoSQL databases
Data Pipelines
Apache data products
API / Web Services (REST/SOAP)
Real-time data processing
ETL and data processing

Education

Bachelor's degree in Information Technology

Tools

Databricks
Microsoft Azure
Tableau
MicroStrategy
Apache Hadoop
Apache Spark
StreamSets
Apache NiFi

Job description

Role: Big Data Architect IT Consultant Master
Client: State of DC
Location: Washington, D.C.

This role will provide expertise to support the development of a Big Data / Data Lake system architecture that supports enterprise data operations for the District of Columbia government, including Internet of Things (IoT) / Smart City projects, the enterprise data warehouse, the open data portal, and data science applications. This is an exciting opportunity to work as part of a collaborative senior data team supporting DC's Chief Data Officer. The architecture includes Databricks, Microsoft Azure platform tools (including Data Lake and Synapse), Apache platform tools (including Hadoop, Hive, Impala, Spark, Sedona, and Airflow), and data pipeline/ETL development tools (including StreamSets, Apache NiFi, and Azure Data Factory). The platform will be designed for District-wide use and for integration with other OCTO Enterprise Data tools such as Esri, Tableau, MicroStrategy, API gateways, and Oracle databases and integration tools.
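For illustration only, here is a minimal sketch of the kind of ingestion job this stack implies, assuming a Databricks workspace attached to Azure Data Lake Storage; the storage account, container, path, table, and column names below are hypothetical placeholders, not details from the posting:

    # Minimal PySpark sketch: land raw IoT sensor data from ADLS Gen2 as a Delta table.
    # All names here (storage account, container, columns, table) are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("iot-ingest-example").getOrCreate()

    # Hypothetical ADLS Gen2 path; real paths and credentials would come from
    # the workspace configuration (e.g., a mounted container or service principal).
    raw_path = "abfss://raw@exampleaccount.dfs.core.windows.net/iot/sensors/"
    raw = spark.read.json(raw_path)

    # Light cleanup before landing the data in a curated zone:
    # parse timestamps and drop rows missing key fields.
    curated = (
        raw.withColumn("event_time", F.to_timestamp("event_time"))
           .dropna(subset=["sensor_id", "event_time"])
    )

    # Delta is the default table format on Databricks.
    curated.write.format("delta").mode("append").saveAsTable("curated.iot_sensors")

On Databricks, the same pattern extends from batch loads to real-time ingestion by swapping spark.read for spark.readStream (Structured Streaming).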

Responsibilities:
1. Coordinates IT project management, engineering, maintenance, QA, and risk management.
2. Plans, coordinates, and monitors project activities.
3. Develops technical applications to support users.
4. Develops, implements, maintains and enforces documented standards and procedures for the design, development, installation, modification, and documentation of assigned systems.
5. Provides training for system products and procedures.
6. Performs application upgrades.
7. Performs monitoring, maintenance, and reporting on real-time databases, real-time network and serial data communications, and real-time graphics and logic applications.
8. Troubleshoots problems.
9. Ensures the project life cycle complies with District standards and procedures.

Minimum Education/Certification Requirements:
Bachelor's degree in Information Technology or a related field, or equivalent experience

Skills

Experience implementing Big Data storage and analytics platforms such as Databricks and Data Lakes
Knowledge of Big Data and Data Architecture and Implementation best practices - 5 Years
Knowledge of architecture and implementation of networking, security and storage on cloud platforms such as Microsoft Azure - 5 Years
Experience with deployment of data tools and storage on cloud platforms such as Microsoft Azure - 5 Years
Knowledge of Data-centric systems for the analysis and visualization of data, such as Tableau, MicroStrategy, ArcGIS, Kibana, Oracle - 10 Years
Experience querying structured and unstructured data sources including SQL and NoSQL databases - 5 Years
Experience modeling and ingesting data into and between various data systems through the use of Data Pipelines - 5 Years
Experience with implementing Apache data products such as Spark, Sedona, Airflow, Atlas, NiFi, Hive, and Impala (see the pipeline sketch after this list) - 5 Years
Experience with API / Web Services (REST/SOAP) - 3 Years
Experience with complex event processing and real-time streaming data - 3 Years
Experience with deployment and management of data science tools and modules such as JupyterHub - 3 Years
Experience with ETL, data processing, and analytics using languages such as Python, Java, or R - 3 Years
Experience with Cloudera Data Platform - 3 Years
Planning, coordinating, and monitoring project activities - 16 Years
Leading projects and ensuring compliance with established standards/procedures - 16 Years
Bachelor's degree in IT or a related field, or equivalent experience - Required
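As one illustration of the pipeline and orchestration experience called for above, here is a toy Apache Airflow DAG, assuming Airflow 2.4+; the DAG id, task names, and steps are hypothetical placeholders rather than anything specified in the posting:

    # Toy Airflow DAG: a daily extract-then-load pipeline.
    # All identifiers below are hypothetical examples.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Placeholder: a real task might pull from a REST API or a source
        # database and stage the result in the data lake.
        print("extracting...")

    def load():
        # Placeholder: a real task might merge staged data into Synapse
        # or a Delta table.
        print("loading...")

    with DAG(
        dag_id="example_daily_etl",    # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",             # Airflow 2.4+ scheduling argument
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> load_task      # extract runs before load

The same shape generalizes to the StreamSets and NiFi flows named above, which express equivalent pipelines as visual dataflow configurations rather than Python code.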