Description
Orbital is a physics-grounded AI copilot that operates complex industrial systems such as refineries, upstream assets, and energy-intensive plants. It combines real-time time-series forecasting, physics-based models, and domain-trained language models to deliver interpretable insights, anomaly detection, and optimisation pathways directly to operations teams.
As a Forward Deployed ML Engineer, your job is to make Orbital’s AI systems work in customer reality. You will deploy, configure, tune, and operationalise our deep learning models inside live industrial environments, spanning cloud, on-premise, hybrid, and air-gapped infrastructure.
This is not a pure research role.
You are not training experimental models in isolation. You are adapting production AI systems to customer data, configuring agents and RAG pipelines, tuning anomaly detection, and ensuring models deliver value in production workflows.
If Research builds the models, you make them work on-site.
Operating Context
Forward Deployed ML Engineers operate in pods of three alongside:
• Full Stack Engineers
• Data Engineers
Each pod delivers 2–3 customer deployments per quarter, owning AI configuration, model tuning, agent orchestration, and inference reliability in production.
Requirements
- MSc in Computer Science, Machine Learning, Data Science, or related field, or equivalent practical experience.
- Strong proficiency in Python and deep learning frameworks (PyTorch preferred).
- Solid software engineering background, including designing and debugging distributed systems.
- Experience building and running Dockerised microservices, ideally with Kubernetes/EKS.
- Experience integrating LLM APIs (OpenAI, Claude, Gemini) and building FastAPI-based ML services and REST inference APIs.
- Familiarity with message brokers (Kafka, RabbitMQ, or similar).
- Comfort working in hybrid cloud/on-prem deployments (AWS, Databricks, or industrial environments).
- Exposure to time-series or industrial data (historians, IoT, SCADA/DCS logs) is a plus.
- Domain experience working as a data scientist in oil and gas or energy is a plus.
- Comfortable in customer-facing, forward-deployed settings, collaborating directly with customers.
- Strong troubleshooting capability in production AI systems.
What Success Looks Like
• AI systems are deployed and running in customer environments.
• Models are tuned to customer data and delivering operational value.
• Anomalies and predictions are trusted by engineers.
• Multi-agent copilots function reliably in production workflows.
• RAG systems retrieve accurate, domain-relevant insights.
• Inference pipelines run with high uptime and low latency.