Location: Remote
Job Type: Full-Time
Experience Level: 10+ Years
Department: Data Engineering & Analytics
Reports To: Technical Lead
Job Summary:
We are seeking a highly skilled Azure Data Engineer with over 10 years of experience designing and implementing scalable data solutions. The ideal candidate will have deep expertise in Azure cloud services, ETL processes, data warehousing, and big data technologies, along with a strong foundation in data modeling, SQL scripting, and performance optimization. You will play a critical role in building high-performance data pipelines and collaborating with cross-functional teams to deliver business-driven data solutions.
Key Responsibilities:
- Design and implement scalable data architectures and solutions using Azure services such as Azure Data Lake, Azure Synapse Analytics, Azure SQL Database, and Azure Data Factory.
- Develop, optimize, and maintain robust ETL/ELT pipelines for structured and unstructured data.
- Work with large-scale data warehousing platforms and big data technologies such as Apache Spark and Hadoop.
- Perform data modeling, indexing, and performance tuning to support high-throughput analytics and reporting solutions.
- Implement data governance, quality, and security best practices across the data lifecycle.
- Collaborate closely with business analysts, data scientists, and application developers to meet data and analytics requirements.
- Automate workflows and deploy data solutions using CI/CD and DevOps practices.
- Monitor data infrastructure, troubleshoot issues, and drive continuous improvement.
Required Skills & Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
- 10+ years of experience in data engineering, including architecture, development, and deployment.
- Strong expertise in Azure data services, including Data Lake, Data Factory, Synapse, and Azure SQL.
- Experience with big data tools like Apache Spark, Hadoop, or Databricks.
- Proficiency in SQL scripting, data modeling, and performance tuning techniques.
- Experience building scalable, fault-tolerant data pipelines for real-time and batch processing.
- Familiarity with DevOps and CI/CD pipelines in a cloud environment.
- Excellent problem-solving skills, attention to detail, and strong communication abilities.
- Ability to work independently and as part of cross-functional teams in an Agile environment.