Amartha is embarking on an exciting new journey and is looking for experienced engineers to work with senior management, existing engineers, and Product in shaping the next wave of innovative product offerings, ensuring Amartha leapfrogs into the next phase of its journey!
Successful candidates will need strong communication skills, as they will be actively involved in discussions and instrumental in bridging the gap between product and technology, as well as coaching, mentoring, and guiding junior engineers in best practices and solving technical challenges.
Job Description
As a Data Engineer, you will design and implement data pipelines while demonstrating expertise in a number of areas, including cloud computing and database design and development.
Responsibilities
- Build, test, and maintain optimal data pipeline architecture
- Build the infrastructure necessary for optimal extraction, transformation, and loading of data from a variety of sources
- Assemble large & complex data sets to meet business demands
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Advanced working SQL knowledge, including query authoring, experience with relational databases, and working familiarity with a variety of databases.
- Strong analytic skills related to working with structured & unstructured datasets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Build processes supporting data transformation, data structures, metadata, dependency and workload management.
- Experience building and optimizing data pipelines, architectures and data sets.
- Experience using the following software/tools:
- Experience with programming languages: Python/Java/Go
- Experience with relational SQL and NoSQL databases (PostgreSQL, MongoDB)
- Experience with data pipeline and workflow management tools: Airflow preferred; Azkaban, Luigi, etc. nice to have
- Experience with Cloud services (GCP & AWS)
Preferred:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with stream-processing systems: Storm, Spark-Streaming, etc.
Please mention that you found this job posting on Fungsi.id — it helps us bring more quality job listings here. Thank you!