
Applied AI Engineer - LLM & NLP

JOB SUMMARY

Netherlands
Posted on 1/30/2026

Skills & Technologies

Languages: Python
Cloud/DevOps: AWS, Docker

Job details

At Toku, we create bespoke cloud communications and customer engagement solutions to reimagine customer experiences for enterprises.

We provide an end-to-end approach to help businesses overcome the complexity of digital transformation and deliver mission-critical CX through cloud communication solutions. Toku combines local strategic consulting expertise, bespoke technology, regional in-country infrastructure, connectivity, and global reach to serve the diverse needs of enterprises operating at scale. Headquartered in Singapore, Toku supports customers across APAC and beyond, with a growing footprint across global markets.

As an Applied AI Engineer at Toku, you will focus on building, improving, and deploying real-world AI capabilities across speech-to-text, chatbots, and large language model–driven features used in production.

This role combines hands-on model development with applied research, where you will evaluate existing approaches, explore new techniques, and translate research insights into practical improvements in live systems.

You will work closely with engineering teams to integrate models into production services while maintaining a strong delivery mindset.

You will thrive in this role if you enjoy balancing deep technical execution with curiosity-driven, applied research that directly shapes product outcomes.

Requirements

What you will be doing

Applied AI Model Development:
Train, fine-tune, evaluate, and improve NLP, speech-to-text, and LLM-based models used in production environments
Work hands-on with chatbots, summarisation, and language understanding features, including retrieval-augmented generation (RAG) and vector-based retrieval systems
Design and run model evaluations, benchmarking existing approaches and validating improvements before deployment

Applied Research & Experimentation:
Read, assess, and experiment with relevant AI/ML research and emerging techniques, translating promising ideas into practical, production-ready solutions
Contribute to prompt design, model optimisation, and iterative experimentation to improve accuracy, latency, and reliability of deployed models

Production Integration & Delivery:
Integrate models into existing backend services using Python-based APIs, collaborating closely with backend engineers
Ensure models are production-ready, maintainable, and resilient when deployed in live customer-facing systems
Support investigation and resolution of AI-related production issues in collaboration with engineering and platform teams

Collaboration & Ownership:
Work closely with engineering teams to align AI capabilities with product requirements and platform constraints
Communicate progress, trade-offs, and technical decisions clearly in planning and delivery discussions

We’d love to hear from you if you have

Core AI & LLM Expertise:
Strong hands-on experience with LLMs, NLP, or speech technologies, including training, fine-tuning, and evaluating models in real-world or production contexts
Practical experience with Python-based AI development (e.g. PyTorch and related ecosystems)

Applied Research Fundamentals:
Hands-on experience reading, evaluating, and applying AI/ML research (e.g. papers, benchmarks, emerging techniques) and translating those insights into production-ready model improvements
A strong foundation in AI/ML fundamentals (e.g. mathematics, machine learning concepts, model behaviour and evaluation), typically supported by an academic background in AI, machine learning, computer science, or a closely related field

Production Integration Experience:
Experience deploying or supporting AI models in production systems, including exposure to monitoring, iteration, and real-world failure modes
Ability to integrate models into existing backend services via Python APIs and work effectively within a microservices-based environment

Tools & Platform Awareness:
Familiarity with retrieval-augmented generation (RAG), embeddings, and vector-based retrieval systems
Working knowledge of AWS-based environments and AI tooling (e.g. EC2, SageMaker, MLflow, Docker)

Ways of Working:
A proactive, problem-solving mindset with the ability to identify opportunities for improvement rather than waiting for direction
Strong collaboration and communication skills when working with engineers across different disciplines

Location

This is a remote/hybrid role based in either the Netherlands (Rotterdam strongly preferred), Singapore, or Hong Kong.

What would you get?
Training and Development
Discretionary Yearly Bonus & Salary Review
Healthcare Coverage based on location
20 days Paid Annual Leave (15 days for Malaysia-based roles), plus other leave allowances

Toku has been recognised as a LinkedIn Top Startup and by the Financial Times as one of APAC’s Top 500 High Growth Companies.

If you’re looking to be part of a company on a strong growth trajectory while working on meaningful, real-world challenges, we’d love to hear from you.