You understand that great things are accomplished when teams work together.
You’ve made a lot of mistakes and, most importantly, have learned from them.
You are experienced in operating and improving the reliability of data storage and processing systems (relational databases, data warehouses, data lakes, and distributed processing systems), including operational optimization (e.g., indexing, query tuning, and monitoring).
You have a solid understanding of stream processing and operating streaming solutions (using Kafka/Kinesis or some other solution) and/or CDC workloads.
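To make the CDC side of this concrete, here is a minimal sketch of applying change events to an in-memory replica. The op codes are loosely modeled on Debezium's (c = create, u = update, d = delete), but the event shape is simplified and hypothetical, not any connector's actual wire format.

```python
def apply_event(table: dict, event: dict) -> None:
    """Apply one simplified CDC change event to an in-memory table."""
    key = event["key"]
    if event["op"] in ("c", "u"):   # create or update: upsert the row
        table[key] = event["after"]
    elif event["op"] == "d":        # delete: drop the row if present
        table.pop(key, None)

replica = {}
events = [
    {"op": "c", "key": 1, "after": {"id": 1, "status": "new"}},
    {"op": "u", "key": 1, "after": {"id": 1, "status": "shipped"}},
    {"op": "c", "key": 2, "after": {"id": 2, "status": "new"}},
    {"op": "d", "key": 2, "after": None},
]
for e in events:
    apply_event(replica, e)

print(replica)  # → {1: {'id': 1, 'status': 'shipped'}}
```

In a real pipeline these events would arrive from a Kafka or Kinesis topic and the "table" would be a downstream store, but the upsert/delete semantics are the core of the workload.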
You have experience in planning, provisioning, scaling and maintaining reliable data processing systems in AWS or GCP (using Terraform/Ansible).
You are comfortable with Python and are familiar with maintaining ETL jobs using Airflow or some other solution.
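For the kind of Python ETL work described above, a minimal extract/transform/load sketch might look like the following. In Airflow each step would typically become its own task wired into a DAG; the function and field names here are hypothetical stand-ins, not from this posting.

```python
def extract() -> list[dict]:
    # Stand-in for pulling rows from a source system
    return [{"sku": "A1", "qty": "3"}, {"sku": "B2", "qty": "5"}]

def transform(rows: list[dict]) -> list[dict]:
    # Cast quantities to integers, skipping malformed rows
    out = []
    for row in rows:
        try:
            out.append({"sku": row["sku"], "qty": int(row["qty"])})
        except (KeyError, ValueError):
            continue
    return out

def load(rows: list[dict], sink: list) -> None:
    # Stand-in for writing to a warehouse table
    sink.extend(rows)

warehouse: list[dict] = []
load(transform(extract()), warehouse)
print(warehouse)  # → [{'sku': 'A1', 'qty': 3}, {'sku': 'B2', 'qty': 5}]
```

Keeping each step a small, independently testable function is what makes the job easy to maintain once it is scheduled by Airflow or a similar orchestrator.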
You are always eager to learn more and love to try out new solutions on your own.
Experience with the AWS data ecosystem is highly appreciated (Amazon Aurora MySQL, DocumentDB/MongoDB, OpenSearch/ElasticSearch, Redshift, Glue/Spark, MSK/Kafka, Kinesis, Debezium, S3/Apache Hudi, MWAA/Airflow).
To apply for this job, please visit shiphero.breezy.hr.