Senior DevOps Engineer

Cracow/Wroclaw, PL

Working with us, you will have the chance to create software that reaches hundreds of millions of users. Our team develops new deep learning software and applications using technologies such as Azure, Docker, Kubernetes, Hadoop, Kafka, Spark and Ansible. We like to experiment, and we have the environment to do it. We work at the intersection of science and technology, creating outstanding solutions. If you are open-minded and eager to keep developing yourself, Synerise is the place for you.

Apply now

Responsibilities

  • taking part in creating and maintaining the IT infrastructure
  • monitoring the infrastructure, ensuring its security and the continuity of its operation
  • deploying applications to staging and production environments, building Continuous Delivery solutions
  • providing developers with tools that increase the effectiveness of the team's work
  • participating in designing, implementing and developing application security solutions
  • ensuring quick detection and handling of incidents, and managing critical situations affecting platform stability
  • participating in risk analysis and prevention, and creating policies and procedures related to the IT infrastructure
  • cooperating with developers (DevOps), Product Owners, Scrum Masters, analysts, testers and industry experts
  • supervising external suppliers (virtual servers and SysOps services)

Our expectations

  • very good knowledge of Linux (Ubuntu, CentOS, Debian)
  • experience (at least 3 years) in virtualized server infrastructure management (based on the Linux environment)
  • knowledge of MS Azure or AWS (at least in the area of Linux virtual servers, disk management and network services)
  • good knowledge of at least one of the following automation tools: Ansible, Puppet, Salt, Terraform
  • knowledge of Kubernetes
  • experience in working with high-traffic and high-availability systems
  • good knowledge of monitoring tools (e.g. ELK, Grafana, InfluxDB, Prometheus)
  • the ability to install and configure applications such as Apache, Nginx, MySQL/MariaDB, PostgreSQL and HAProxy in a Linux environment
  • knowledge of at least one of the following languages: Bash, Python, Go
  • experience in deploying and troubleshooting applications written in the following languages: Java, Scala, Python, PHP, Go

Nice to have

  • knowledge of the Hadoop ecosystem (HDFS, HBase, Kafka, Spark, ...)
  • familiarity with JIRA, Confluence and Microsoft Office 365

Our offer

  • opportunity for professional development and a chance to take part in a strategic and ambitious Big Data-related project utilizing new technologies
  • opportunity to learn from and work with a seasoned Ops team with extensive professional experience
  • the chance to learn Scrum and DevOps philosophies and methodologies
  • hardware and software chosen by you
  • flexible working hours - you can start your work between 7 and 10 AM
  • medical care
  • modern office, free snacks
  • possibility of co-financed participation in industry conferences and internal Tech Talks
  • integration events - we focus on the atmosphere and relationships between people
  • daily adrenaline

GDPR Compliance

In your application please include the following statement: "I hereby give my consent to the processing of my personal data which are necessary to the Personal Data Administrator for the purpose of the recruitment process." This consent will allow us to contact you in connection with our recruitment.

If you would like to take part in our future recruitment processes, please also include the following consent: "I hereby give my consent to the processing of my personal data which are necessary to the Personal Data Administrator for the purpose of future recruitment processes." This consent will allow us to contact you in connection with future recruitment processes. We will process your data for two years.

Apply for this job