Prompt Engineer
Budapest
Permanent | 3–5 years of professional experience | Hybrid
About the Role:
We are seeking a highly skilled Prompt Engineer / System Engineer to join our multidisciplinary team working at the intersection of artificial intelligence, systems architecture, and infrastructure reliability. This role is ideal for someone with a deep understanding of large language models (LLMs), prompt design, and system engineering principles who wants to help build, optimize, and maintain scalable, AI-driven solutions.

Key Responsibilities:

Prompt Engineering
- Design, test, and refine prompts to optimize the performance of large language models (e.g., GPT-4, Claude, LLaMA).
- Develop prompt templates and reusable patterns for various use cases, including classification, summarization, generation, and dialogue.
- Collaborate with product and data teams to understand use case requirements and translate them into effective prompt strategies.
- Evaluate LLM responses and fine-tune prompts to reduce hallucinations, bias, or inappropriate content.
System Engineering
- Design and maintain scalable infrastructure for AI services, APIs, and model inference pipelines (on-prem or cloud).
- Automate deployment and monitoring of AI components using CI/CD tools (e.g., Azure DevOps, GitHub Actions, Jenkins).
- Implement robust logging, observability, and performance monitoring for AI applications.
- Ensure system security, data privacy, and compliance with internal and external standards.
Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 3+ years of experience in system engineering, DevOps, or backend development.
- Experience with AI/LLM platforms like OpenAI, Anthropic, Cohere, or open-source models (e.g., Hugging Face).
- Strong skills in scripting and automation (e.g., Python, Bash).
- Familiarity with containerization (Docker, Kubernetes) and cloud services (Azure, AWS, or GCP).
- Understanding of prompt engineering techniques, LLM limitations, and evaluation metrics.
Preferred Qualifications:
- Experience fine-tuning or deploying custom LLMs.
- Knowledge of NLP concepts and transformer architectures.
- Familiarity with RAG (Retrieval-Augmented Generation), vector databases, and embeddings (e.g., FAISS, Qdrant, Pinecone).
- Experience with MLOps pipelines and model lifecycle management.
- Security awareness in AI deployments (e.g., token protection, output filtering).
Why Join Us?
- Be part of a cutting-edge AI engineering team driving real-world innovation.
- Work on impactful projects combining infrastructure reliability with next-gen AI capabilities.
- Enjoy a flexible work environment and opportunities for continued learning and growth.