AI Integrity Engineer
Apply now
Company Overview
Regie.ai is a Series A-funded, industry-leading sales automation platform that combines your CRM and sales engagement data, third-party buyer intent data, and Generative AI to continuously determine the most strategic people to engage, when to reach out, and which message will drive the strongest client engagement.
Whether supporting sales representatives in their prospecting efforts or orchestrating end-to-end prospecting processes, Regie.ai helps scale your go-to-market strategies while consistently delivering productive meetings and a robust pipeline.
About the Founders
Srinath Sridhar - CEO and Co-founder. Srinath holds a Ph.D. from Carnegie Mellon and previously founded companies including Onera. He was among the first 100 engineers at Facebook and has also worked at Google and Bloomreach.
Matt Millen - Co-founder and President. Matt has extensive experience leading companies as a CGO, CRO, and VP of Revenue, and has helped multiple startups build their growth stories from the ground up.
Key Responsibilities
- Design and implement comprehensive testing strategies for AI and machine learning models, including validation of algorithms, model accuracy, and performance.
- Develop automated testing frameworks for ML pipelines, ensuring robust continuous integration and delivery of AI models.
- Validate the data preprocessing, feature engineering, and model training processes to identify and resolve potential issues early in the ML lifecycle.
- Conduct exploratory and functional testing to ensure AI/ML models meet business requirements and perform effectively in production environments.
- Monitor the performance of AI/ML models in production, identifying anomalies, inaccuracies, and degradation over time.
- Collaborate with AI Engineers to understand model design, potential edge cases, and system requirements.
- Build performance benchmarks, including stress testing and regression testing for AI models.
- Automate testing processes to ensure scalable and efficient QA across multiple models and iterations.
- Document test results, provide actionable feedback, and support troubleshooting efforts to enhance model quality.
Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience in QA engineering, with a focus on AI/ML model testing or validation.
- Strong understanding of machine learning concepts, algorithms, and data pipelines.
- Experience with automated testing tools and frameworks for AI/ML systems (e.g., PyTest).
- Proficiency in Python and knowledge of testing AI/ML models and workflows.
- Familiarity with CI/CD tools for AI model deployment (e.g., Jenkins, GitLab CI).
- Experience with cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes).
- Familiarity with monitoring tools and techniques for production ML models.
- Excellent debugging, problem-solving, and analytical skills.
- Strong attention to detail and ability to collaborate effectively in a cross-functional environment.
Nice to Have
- Experience with Generative AI or NLP models.
- Knowledge of model-versioning tools such as MLflow and DVC.
- Exposure to sales automation platforms or CRM systems.
Additional Details
Location - Hybrid (3 days per week) out of Bangalore (preferred) or Gurgaon