As a Forward Deployed Engineer at Verda, you’ll work directly with enterprise customers in the UK and Ireland as a hands-on technical partner, embedding closely with their teams to deploy, run, and scale AI workloads on our cloud platform. You’ll operate at the front line of customer engagement, tackling complex infrastructure and AI challenges in real production environments.
You’ll support onboarding, deployments, and ongoing optimization, working side by side with customer engineers to ensure critical workloads run reliably and efficiently. When issues arise, you’ll dive deep, troubleshooting production problems in close collaboration with Verda’s platform, support, and engineering teams.
This role sits at the intersection of customer, product, and engineering. You’ll translate customer requirements and real-world constraints into actionable insights for internal teams, directly influencing how Verda’s AI cloud evolves. It’s a role for engineers who enjoy forward-deployed work, close customer collaboration, and making an immediate impact in a fast-growing AI infrastructure company.
Act as a hands-on technical partner for enterprise customers in the UK and Ireland
Support customer onboarding, deployment, and optimization of the AI cloud platform
Work closely with customer engineering teams to solve complex infrastructure and AI workload challenges
Translate customer requirements into actionable feedback for product and engineering teams
Troubleshoot production issues and support critical workloads in collaboration with support and platform teams
5+ years of experience in software engineering, infrastructure, or solutions engineering roles
Experience with AI/ML workloads, GPU infrastructure, or HPC environments
Strong background in Linux, cloud infrastructure, and distributed systems
Familiarity with PyTorch, TensorFlow, or distributed training frameworks
Prior experience in a forward-deployed or solutions architect role
Hands-on experience with Kubernetes, containers, and modern CI/CD workflows