Data & Infra Engineer - 3

Bengaluru, Karnataka, India | Technology | Full-time


 

About Exotel

 

Exotel is one of Asia’s largest and most trusted customer engagement platforms. From voice to SMS, WhatsApp to AI-led contact centre intelligence, we help businesses deliver seamless, secure, and scalable conversations with their customers. As we grow, our focus remains on customer centricity, operational excellence, and smart automation to power the next generation of experiences.

 

Platform Engineering @ Tech @ Exotel

The Platform Engineering group is responsible for the distributed system (cloud) infrastructure on which the rest of Exotel's microservices are developed and deployed, as well as for the data infrastructure. The team's deliverables significantly influence the reliability (resilience, uptime, accuracy, security, etc.), usability, scalability, and efficiency of the overall Exotel stack. They improve engineering and business productivity by abstracting away distributed-system complexity and data-management overheads.

Some of the team's key responsibilities include:

  • Enhancing the platform by exploring and adopting new technologies (e.g. orchestration engines, serverless, big data) and optimising the architecture and implementation

  • Ensuring SLAs are met by monitoring and optimising the deployments

 

Job Role

 

The role of a Software Engineer 3 - Data Engineering within the team includes:

  • Lead projects pertaining to data infrastructure development: data pipelines and the data analytics platform, reporting frameworks, distributed databases and queues, etc.

  • Explore and adopt technologies for big data management, data processing, and visualisation frameworks

  • Consult with other project teams on data design and modelling

  • Mentor junior engineers on the team

Much of your focus will go beyond simply adding features: you will be pushing a distributed system to its limits, constantly asking questions like "How do I scale out my cluster to twice its size within 60 seconds?", "How do I increase platform uptime from 99.95% to 99.99%?", and "How do I shave a few extra milliseconds off response times?"

 

What does it take?

 

We are looking for candidates with a strong understanding of computer and distributed systems and strong programming skills. We want people who love designing and engineering distributed systems (which involves a lot more than programming).

 

Must-haves

  • Experience developing a few of the following: data pipelines, data APIs, data connectors, and reporting frameworks.

  • Experience with the Hadoop ecosystem, ETL tools, or BI tools

  • Experience working with junior engineers and leading project teams to deliver critical software solutions

  • Experience working with distributed databases (a few of: MySQL, Aerospike, Elasticsearch, Redis, etc.)

  • A strong understanding of data structures and algorithms

  • Experience with programming languages such as Java or Go

  • Strong computer science fundamentals and a strong drive to explore new-age technologies

  • Willingness to learn new technologies and take ownership of project execution

  • A "devops" mindset. You own what you will develop.

 

Good-to-haves

  • 4-6 years of experience working with major cloud platforms (AWS, Azure, GCP)

  • Experience with cloud (AWS, GCP, Azure) infrastructure-as-code (IaC) tooling, e.g. Ansible, Chef, or Puppet

  • Exposure to AI/ML technologies

  • Practical experience managing production-scale systems