Hope you are doing well. We have an open position for a Data Analyst in Richmond, Virginia. Please review the description below and let me know if you are interested. If so, kindly share a copy of your resume along with your rate/salary expectations and the best time to reach you.
Client: Virginia Elect
Location: Richmond, VA 23219 (open to remote)
Job ID: 762942
Responsibilities:
- Design, build, and maintain ETL/ELT pipelines in enterprise data environments using Azure-native tools.
- Develop and manage data integration solutions using Azure Data Factory, Dataflows, Synapse Pipelines, or equivalent orchestration tools.
- Implement complex data transformation logic and data cleansing workflows using SQL, Python, or PySpark (a PySpark sketch follows this list).
- Work with Delta Lake and Azure Data Lake Storage Gen2, handling data in JSON, Parquet, and other modern formats.
- Design and develop modular, reusable, and metadata-driven pipeline components with robust error handling and logging mechanisms.
- Ensure responsible management of sensitive data using data masking, PII handling, and encryption techniques.
- Establish data quality frameworks, including automated validation, reconciliation, and logging methods.
- Apply DevOps and DataOps best practices such as versioning, automated testing, and CI/CD pipelines for data engineering workflows.
- Support data publishing for oversight, regulatory compliance, and open data initiatives.
- Collaborate with stakeholders to leverage public data sources and manage publishing workflows for transparency datasets.
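For illustration only, here is a minimal PySpark sketch of the kind of cleansing, masking, and Delta Lake work described above. It assumes a Spark environment with Delta Lake configured and access to ADLS Gen2; all paths, column names, and the logger name are hypothetical, not part of the client's actual pipeline.

```python
# Sketch: read raw JSON from ADLS Gen2, cleanse, mask PII, write Delta.
# All paths and columns (record_id, ssn, email) are hypothetical.
import logging

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

spark = SparkSession.builder.appName("cleanse-and-mask").getOrCreate()

raw_path = "abfss://raw@examplelake.dfs.core.windows.net/records.json"
delta_path = "abfss://curated@examplelake.dfs.core.windows.net/records"

try:
    df = spark.read.json(raw_path)

    cleansed = (
        df.dropDuplicates(["record_id"])             # basic cleansing
          .filter(F.col("record_id").isNotNull())
          # mask PII: keep last 4 SSN digits, hash the email address
          .withColumn("ssn", F.concat(F.lit("***-**-"),
                                      F.col("ssn").substr(-4, 4)))
          .withColumn("email", F.sha2(F.col("email"), 256))
    )

    cleansed.write.format("delta").mode("overwrite").save(delta_path)
    log.info("Wrote %d rows to %s", cleansed.count(), delta_path)
except Exception:
    # robust error handling/logging: fail loudly, leave target untouched
    log.exception("Pipeline step failed")
    raise
```

In practice a step like this would be parameterized from metadata (paths, key columns, masking rules) rather than hard-coded, which is what the metadata-driven bullet above refers to.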
Required Skills & Experience:
- 3+ years of experience in building and maintaining ETL/ELT pipelines using Azure-native tools.
- Hands-on expertise with Azure Data Factory, Dataflows, Synapse Pipelines, or similar orchestration platforms.
- Proficiency in SQL, Python, or PySpark for data transformation and cleansing.
- Experience with Delta Lake and Azure Data Lake Storage Gen2, including handling data in JSON and Parquet formats.
- Expertise in building modular and reusable pipeline components using metadata-driven approaches and implementing robust error handling.
- Strong understanding of data masking, PII handling, and encryption techniques.
- Proven experience with data quality frameworks, including automated validation and data reconciliation (see the sketch after this list).
- Solid grasp of DevOps/DataOps practices, including version control, testing, and CI/CD for data pipelines (a unit-test sketch appears at the end of this posting).
- Experience supporting data publishing for oversight, regulatory, or open data initiatives.
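As a rough illustration of the validation and reconciliation bullet above, the sketch below compares source and target row counts and flags null business keys, failing the run if a check breaks. The table paths and the key column ("record_id") are made up for the example.

```python
# Sketch: automated validation/reconciliation between raw and curated
# Delta tables. Paths and the key column are hypothetical.
import logging

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dq")

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

source = spark.read.format("delta").load("/lake/raw/records")
target = spark.read.format("delta").load("/lake/curated/records")

src_count, tgt_count = source.count(), target.count()
null_keys = target.filter(F.col("record_id").isNull()).count()

# Reconciliation: curated rows must not exceed raw rows (dedup only
# removes), and the business key must never be null.
if tgt_count > src_count or null_keys > 0:
    log.error("DQ failure: src=%d tgt=%d null_keys=%d",
              src_count, tgt_count, null_keys)
    raise ValueError("Data quality checks failed")
log.info("DQ passed: src=%d tgt=%d", src_count, tgt_count)
```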
Highly Desired:
- Familiarity with public data sources, government transparency datasets, and publishing workflows.
- Certifications such as DP-203 (Azure Data Engineer Associate) or Azure Solutions Architect.
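Finally, the unit-test sketch referenced in the DevOps/DataOps bullet: a minimal pytest example of the kind of automated test that runs in a CI/CD pipeline for transformation code. The function under test (mask_ssn) is a hypothetical pure helper, shown inline for brevity.

```python
# Sketch: pytest-style tests for a small, pure transformation helper.
def mask_ssn(ssn: str) -> str:
    """Keep only the last four digits of a social security number."""
    return "***-**-" + ssn[-4:]

def test_mask_ssn_keeps_last_four():
    assert mask_ssn("123-45-6789") == "***-**-6789"

def test_mask_ssn_hides_leading_digits():
    assert "123" not in mask_ssn("123-45-6789")
```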