**Company**
TimeZest facilitates easy scheduling of appointments between end users and IT service providers. Since its inception in 2019, TimeZest has rapidly grown, becoming a vital software tool for over 1,000 businesses. Operating as a fully remote company from the start, our team of 22 spans Europe, Asia, and the United States. We focus on creating customer-loved software in an efficient, relaxed work environment. As a bootstrapped and profitable company, we minimize processes to maintain this efficiency.
**Role Overview**
As we enhance our platform with AI-driven features, we are seeking a Senior Software Engineer to lead the integration of large language models (LLMs). This role involves designing and building the core backend systems that power intelligent ticket categorization, document indexing, vector search, and prompt orchestration, while also mentoring the team in this growing domain. Strong Rails experience and expertise in LLMs are essential for developing a scalable foundation for these new AI capabilities.
**What You’ll Do**
– Design and develop scalable backend services for LLM workflows, including vector embedding, retrieval pipelines, and API integration.
– Build data pipelines for processing and indexing ticket content, documents, and structured data.
– Guide architectural decisions related to LLM orchestration, prompt engineering, context management, and API reliability.
– Collaborate cross-functionally with product, engineering, and design teams to advance ideas from prototype to production.
– Implement backend best practices in code quality, testing, observability, and performance.
– Mentor junior and mid-level engineers through code reviews, design sessions, and hands-on collaboration.
**What We’re Looking For**
– Expertise in relational databases (PostgreSQL), data modeling, indexing, caching, and performance tuning.
– Experience integrating LLM APIs (e.g., OpenAI, Anthropic) into production software, preferably within Rails applications.
– Strong system-design skills for search workloads, scalable services, and data pipelines.
**Our Stack**
Our technology stack includes Ruby on Rails 7, PostgreSQL, Sidekiq, TypeScript, and React, deployed on Heroku.
**Nice to Have**
– Experience deploying or self-hosting open-source LLMs (e.g., Mistral, LLaMA) with modern inference frameworks.
**What We Offer**
This is a full-time, remote position with a monthly salary. You can be based anywhere in the world, provided your working hours overlap with the engineering team's by 2-3 hours. We look forward to hearing from you!