We are seeking a skilled Data Engineer with strong expertise in Databricks and Delta Live Tables to drive the evolution of our client's data ecosystem.
The ideal candidate will demonstrate strong technical leadership and own hands-on execution, leading the design, migration, and implementation of robust data solutions while mentoring team members and collaborating with stakeholders to achieve enterprise-wide analytics goals.
Key Responsibilities
Collaborate in defining the overall solution architecture, and design, build, and maintain reusable data products using Databricks, Delta Live Tables (DLT), PySpark, and SQL (a minimal DLT sketch follows this list).
Migrate existing data pipelines to modern frameworks, ensuring scalability and efficiency.
Develop data infrastructure, pipeline architecture, and integration solutions while actively contributing to hands-on implementation.
Build and maintain scalable, efficient data processing pipelines and solutions for data-driven applications.
Monitor and ensure adherence to data security, privacy regulations, and compliance standards.
Troubleshoot and resolve complex data-related challenges and incidents in a timely manner.
Stay at the forefront of emerging trends and technologies in data engineering and advocate for their integration when relevant.
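To make the Databricks and Delta Live Tables expectations above concrete, here is a minimal sketch of a medallion-style DLT pipeline in PySpark. The source path, table names, and columns are illustrative assumptions, not references to any actual client codebase; spark is the session DLT provides inside a pipeline notebook.

    import dlt
    from pyspark.sql import functions as F

    # Bronze: ingest raw JSON incrementally with Auto Loader
    # (the landing path is a hypothetical example).
    @dlt.table(comment="Raw orders ingested as-is.")
    def orders_bronze():
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/mnt/raw/orders")
        )

    # Silver: typed and validated; rows failing the expectation are dropped.
    @dlt.table(comment="Validated orders with typed columns.")
    @dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
    def orders_silver():
        return (
            dlt.read_stream("orders_bronze")
            .withColumn("order_ts", F.to_timestamp("order_ts"))
            .select("order_id", "customer_id", "amount", "order_ts")
        )

    # Gold: business-level aggregate for analytics consumers.
    @dlt.table(comment="Daily revenue per customer.")
    def daily_revenue_gold():
        return (
            dlt.read("orders_silver")
            .groupBy("customer_id", F.to_date("order_ts").alias("order_date"))
            .agg(F.sum("amount").alias("revenue"))
        )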
Required Skills & Qualifications
Proven expertise in Databricks, Delta Live Tables, SQL, and PySpark for processing and managing large data volumes.
Strong experience designing and implementing dimensional models and the medallion architecture.
Strong experience designing and migrating existing Databricks workspaces and models to Unity Catalog-enabled workspaces.
Strong experience creating and managing group access control lists (ACLs) and compute and governance policies in Databricks Unity Catalog (see the grant sketch after this list).
Hands-on experience with modern data pipeline tools (e.g., AWS Glue, Azure Data Factory, Fivetran, dbt) and modern cloud data platforms such as Databricks.
Knowledge of cloud data lakes (e.g., Databricks Delta Lake, Azure Storage, and/or AWS S3).
Demonstrated experience applying DevOps principles to data engineering projects, using version control and CI/CD for IaC and codebase deployments (e.g., Azure DevOps, Git).
Strong experience with batch and streaming data processing techniques and file compaction strategies (see the compaction sketch after this list).
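For context on the Unity Catalog governance items above, group-level access is typically managed with SQL GRANT statements; a minimal sketch follows, assuming a hypothetical analytics catalog, sales schema, and data_engineers/analysts account groups.

    # Minimal Unity Catalog grant sketch (catalog, schema, table,
    # and group names are all hypothetical).
    spark.sql("GRANT USE CATALOG ON CATALOG analytics TO `data_engineers`")
    spark.sql("GRANT USE SCHEMA ON SCHEMA analytics.sales TO `data_engineers`")
    spark.sql("GRANT MODIFY ON TABLE analytics.sales.orders_silver TO `data_engineers`")
    spark.sql("GRANT SELECT ON TABLE analytics.sales.orders_silver TO `analysts`")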
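Similarly, the file compaction expectation usually comes down to Delta Lake's OPTIMIZE command and table-level auto-compaction properties; the table name below is again hypothetical.

    # Rewrite many small files into fewer large ones; ZORDER co-locates
    # rows on a common filter column to speed up reads.
    spark.sql("OPTIMIZE analytics.sales.orders_silver ZORDER BY (customer_id)")

    # Or opt the table into optimized writes and auto-compaction.
    spark.sql("""
        ALTER TABLE analytics.sales.orders_silver SET TBLPROPERTIES (
            'delta.autoOptimize.optimizeWrite' = 'true',
            'delta.autoOptimize.autoCompact' = 'true'
        )
    """)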
Nice-to-Have Skills
Familiarity with architectural best practices for building data lakes.
Hands-on experience with additional Azure services, including message queues, Service Bus, cloud storage, virtual networks, serverless compute, and managed cloud SQL databases, as well as OOP languages and frameworks.
Experience with BI tools (e.g., Power BI, Tableau) and deploying data models.
Experience configuring and managing data governance and access controls with Databricks Unity Catalog in a Delta Lake environment.
Soft Skills
Ability to identify, troubleshoot, and resolve complex data issues effectively.
Strong teamwork and communication skills, and the intellectual curiosity to work collaboratively and effectively with cross-functional teams.
Commitment to delivering high-quality, accurate, and reliable data products and solutions.
Willingness to embrace new tools, technologies, and methodologies.
Innovative thinker with a proactive approach to overcoming challenges.