AI Engineer
Location: Fully remote (EMEA timezone)
Start date: ASAP
Languages: Fluent English required
Industry: AI Infrastructure / Cloud / European Deep-Tech SaaS

About the Role
Pragmatike is recruiting on behalf of a European deep-tech company building AI-native cloud services and distributed AI infrastructure.
Their platform delivers managed inference, LLM-as-a-Service, enterprise RAG solutions, and custom B2B model deployments, supporting real-world production workloads across text, image, and multimodal AI systems.
We are seeking an AI Engineer to join a highly technical AI Services team building production-grade GenAI and AI infrastructure products.
This role is focused on model optimization, inference performance, AI system design, and enterprise AI deployments, working at the intersection of software engineering, machine learning, and cloud-native infrastructure.
You will play a key role in building scalable AI services that power real customer workloads, with strong ownership, technical autonomy, and direct impact on production systems.
What You'll Do
- Optimize model inference using advanced techniques including quantization (GPTQ, AWQ, GGUF), distillation, pruning, and speculative decoding
- Build and integrate GenAI capabilities beyond LLMs, including computer vision, image generation (Stable Diffusion, FLUX), and multimodal models
- Design and implement pre-processing and post-processing pipelines, including prompt engineering, structured output parsing, guardrails, and context management
- Build RAG systems, embedding pipelines, and semantic retrieval architectures for enterprise AI applications
- Drive model selection, benchmarking, and cost/performance trade-off decisions across AI services
- Build evaluation frameworks to measure model quality, latency, reliability, and production performance
- Build production AI systems that go beyond experimentation and notebooks, focusing on scalability, reliability, and maintainability
- Collaborate closely with platform, infrastructure, and product teams to deliver integrated AI services
- Contribute to AI platform architecture and long-term technical direction
- Participate in the full lifecycle of AI systems, from research and prototyping to production deployment and operations

What We're Looking For
- 3+ years of software engineering experience, with at least 1 year focused on AI/ML systems
- Hands-on experience with model optimization techniques including quantization, distillation, and fine-tuning
- Strong Python skills and experience with modern ML frameworks (PyTorch, Transformers, Diffusers)
- Solid understanding of modern LLM architectures, inference patterns, and the GenAI ecosystem
- Experience building real production AI applications (not just research prototypes or notebooks)
- Strong engineering mindset with a focus on reliability, scalability, and maintainability
- Ability to move fast while maintaining production-grade quality standards
- Ownership mentality and comfort operating in early-stage, fast-moving environments
Bonus Points
- Experience with computer vision, image/video generation, or multimodal AI systems
- Background in embedding models, vector databases, and semantic retrieval at scale
- Familiarity with structured generation, function calling, agent frameworks, or orchestration systems
- Experience with distributed systems, cloud-native platforms, or AI infrastructure
- Exposure to cost-optimization strategies for large-scale AI inference systems

Why This Role Will Advance Your Career
- Fully remote work from anywhere (EMEA timezone preferred)
- Equipment budget to build your ideal technical workspace
- Company offsites to connect with a highly technical international team
- Career growth within a scaling engineering and AI organization
- Work on cutting-edge distributed systems, AI infrastructure, and production GenAI platforms

Why Join Us
Our client is redefining cloud infrastructure through decentralization and advanced automation, offering a sovereign, energy-efficient alternative to hyperscale cloud providers.
You'll join a deeply technical environment where architecture matters, performance is critical, and your decisions will directly shape the evolution of a complex, ambitious platform operating at the intersection of distributed systems, networking, and cloud infrastructure.

Pragmatike is committed to a fair, transparent, and inclusive recruitment process.
We do not discriminate based on age, disability, gender, gender identity or expression, marital or civil partner status, pregnancy or maternity, race, religion or belief, sex, or sexual orientation.
In accordance with GDPR, your personal data will be processed lawfully, fairly, and securely, and used solely for recruitment purposes, including sharing it with our client(s) for employment consideration.
You may request access, correction, or deletion of your data at any time.
We are committed to maintaining the confidentiality and security of your information throughout the recruitment process.
Pragmatike Spain