Responsibilities:
· Design, build, and maintain data pipelines and ETL processes.
· Collaborate with data scientists, analysts, and other stakeholders to understand data requirements.
· Ensure data quality, integrity, and security.
· Optimize data storage and retrieval for performance and scalability.
· Monitor and troubleshoot data pipeline issues.
· Stay current with industry trends and best practices in data engineering.
Requirements:
Skills:
· Proficiency in programming languages such as Python, Java, or Scala.
· Experience with data processing frameworks like Apache Spark, Hadoop, or Flink (an illustrative sketch follows this list).
· Knowledge of SQL and NoSQL databases.
· Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
· Understanding of data warehousing concepts and technologies.
· Experience with version control systems (e.g., Git) and CI/CD pipelines.
· Strong problem-solving skills and attention to detail.
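
For a concrete, purely illustrative flavor of the pipeline and Spark work described above, the minimal PySpark sketch below reads raw data, applies basic quality checks, and writes a partitioned table. The paths, table layout, and column names (event_id, event_ts) are hypothetical placeholders, not part of this posting or any actual system.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Minimal illustrative ETL job: ingest raw events, enforce basic data
    # quality, and write a partitioned, query-friendly table.
    spark = SparkSession.builder.appName("example_etl").getOrCreate()

    # Hypothetical source path and schema.
    raw = spark.read.json("s3://example-bucket/raw/events/")

    cleaned = (
        raw.dropDuplicates(["event_id"])            # data quality: remove duplicate records
           .filter(F.col("event_ts").isNotNull())   # data quality: drop malformed rows
           .withColumn("event_date", F.to_date("event_ts"))
    )

    (cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")                  # partitioning aids retrieval performance
        .parquet("s3://example-bucket/curated/events/"))  # hypothetical sink

    spark.stop()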
Qualifications:
· Bachelor’s degree in Computer Science, Data Engineering, or a related field.