Let me tell you what I did
With over 5 years of experience in the AI landscape, I have seen quite a lot of different challenges, each unique, interesting, and demanding its own approach. Let me highlight the most important ones and outline what I did.
PyTorch
~
Recommendation Systems
~
ML Ops
~
AWS • Lambda • ECS • SageMaker
~
GenAI & LLMs
~
Elasticsearch
~
Production ML
~
Vector Databases
~
FastAPI
~
Computer Vision
~
RAG Pipelines
~
Docker • CI/CD
~
Recent roles
-
At STRV, I operated as an end-to-end ML engineer inside product teams. I did not just build models; I also made sure they shipped, scaled, and behaved well in production.
Much of my work revolved around search, recommendation, and retrieval systems. I designed hybrid search architectures combining Elasticsearch with vector embeddings, built RAG pipelines backed by vector databases, and implemented reranking layers to improve retrieval quality beyond naive semantic search. These systems powered real consumer applications, where latency and reliability mattered as much as model accuracy.
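The production pipelines themselves aren't reproduced here, but one common way to merge a keyword ranking with a semantic ranking, as in the hybrid search setups described above, is Reciprocal Rank Fusion. This is a minimal, self-contained sketch; the document IDs are hypothetical:

```python
def rrf_fuse(rankings, k=60):
    """Fuse multiple ranked result lists with Reciprocal Rank Fusion.

    rankings: list of lists of document ids, each ordered best-first.
    k: smoothing constant; 60 is the value from the original RRF paper.
    Returns document ids ordered by fused score, best first.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            # Each list contributes 1 / (k + rank) for every doc it ranks.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc3", "doc1", "doc7"]  # e.g. BM25 / exact-match results
vector_hits = ["doc1", "doc5", "doc3"]   # e.g. embedding kNN results
fused = rrf_fuse([keyword_hits, vector_hits])
```

Documents that appear high in both lists float to the top, which is why a fusion step like this often beats either ranking alone before a dedicated reranker is even applied.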
Beyond retrieval, I worked on recommendation engines using online learning techniques and fine-tuned computer vision models to solve cold-start problems. The common thread was pragmatic ML: tight feedback loops, measurable KPIs, and infrastructure designed for iteration.
I deployed most systems on AWS (Lambda, ECS, ECR, SageMaker, Bedrock), containerized with Docker and supported by lightweight but effective MLOps practices. When needed, I stepped into data engineering, designing ETL pipelines with dbt and Meltano, ensuring that model performance wasn't limited by messy upstream data.
This role sharpened my ability to think in systems: balancing retrieval quality, serving latency, monitoring, infrastructure cost, and product impact.
Key responsibilities:
Researching and implementing RAGs, chatbots, and rerankers for enhanced retrieval performance.
Developing recommendation engines and computer vision for mobile apps, addressing cold-start issues.
Fine-tuning models for content moderation and audio projects, integrating with frontends and backends.
Managing data engineering, migrations, and MLOps on AWS for production-scale solutions.
-
At STIL, I built an AI system in a medically driven environment with strict budget constraints, meaning no cloud and no overengineered stack, just careful engineering.
The core project was a tremor classification system using smartphone IMU data. I worked with both state-of-the-art time series techniques (MiniRocket) and strong traditional baselines (XGBoost), ensuring that improvements were measurable and clinically meaningful, not just technically impressive.
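The actual system used MiniRocket and XGBoost; purely as an illustration of what a feature-based baseline for IMU tremor data can look like, here is a hand-rolled sketch on synthetic data (the window shape, sampling rate, and feature choices are assumptions, not the real pipeline):

```python
import numpy as np

def tremor_features(window):
    """Compute simple per-axis features from an IMU window.

    window: array of shape (n_samples, 3) with accelerometer x/y/z.
    Returns a 1-D vector: (mean, std, dominant FFT bin) per axis.
    """
    feats = []
    for axis in range(window.shape[1]):
        signal = window[:, axis]
        feats.append(signal.mean())
        feats.append(signal.std())
        # Dominant frequency bin via FFT magnitude, excluding the DC bin.
        spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
        feats.append(float(np.argmax(spectrum[1:]) + 1))
    return np.array(feats)

# Synthetic example: an 8 Hz tremor-like oscillation on the x axis,
# sampled for 2 seconds at 100 Hz (so bin k corresponds to k * 0.5 Hz).
t = np.linspace(0, 2, 200, endpoint=False)
window = np.zeros((200, 3))
window[:, 0] = np.sin(2 * np.pi * 8 * t)
features = tremor_features(window)
```

A vector like this would then feed a gradient-boosted classifier; the point of keeping such a baseline around is that any fancier model has to beat something interpretable and cheap.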
What made this role unique was its end-to-end ownership. I designed the FastAPI backend, deployed a PostgreSQL database (with indexing strategies and PGVector where needed), containerized everything with Docker Compose, and built CI/CD pipelines that automated testing, deployment, and even model retraining.
Because we ran on our own infrastructure, every architectural decision had cost implications. That forced discipline: efficient queries, lean services, clean abstractions.
It was a role that blended ML research, backend engineering, and DevOps in a context where reliability mattered as much as innovation.
Key responsibilities:
Architectural ownership and technical decision-making.
Production readiness and ML Ops considerations.
Working with ambiguity and complex constraints.
Enabling teams through knowledge sharing.
-
During my time at O2, I developed a customer life-cycle management model using TensorFlow neural networks to categorize customers, and a churn prediction system with XGBoost that achieved 4x better lift than random selection, enabling targeted call center interventions. As part of the PPF group, I extended these models to O2 Slovakia and Telenor in Serbia, Hungary, and Bulgaria. Data preprocessing pipelines were built in PySpark on our Spark cluster, later migrated to Azure, with automation via Jenkins and Azure Pipelines. Development leveraged Azure Databricks and ML Studio for efficient model building and deployment.
Key Responsibilities:
Implementing neural networks and XGBoost for customer modeling and churn prediction.
Expanding AI solutions across international teams in the PPF group.
Building and migrating data pipelines in PySpark for large-scale processing.
Automating workflows with CI/CD tools on on-prem and cloud infrastructure.
Tech that makes me happy
Favourite domains
Recommendation systems
Worked on designing and improving recommendation systems that handle real-world data and constraints. Focused on balancing relevance, performance and maintainability in production environments.
Computer Vision
Developed computer vision solutions for practical applications, from data preprocessing to model deployment. Emphasis was on robustness, performance and fitting models into larger systems.
GenAI applications
Built and integrated GenAI features into existing products, moving beyond prototypes to reliable, production-ready use cases. Paid close attention to evaluation, safety and operational concerns.
ML Ops & productionization
Helped teams take machine learning models from experimentation to stable production systems. Focused on deployment, monitoring, iteration and making ML systems maintainable over time.
Sharing knowledge
I enjoy sharing knowledge and learning from the broader tech community. I’ve spoken at multiple PyCon conferences across Africa, including PyCon Senegambia and PyCon Namibia.
PyCon Namibia 2026
Advanced hybrid search with Elasticsearch
Demonstrated how to combine semantic and exact search in Elasticsearch to handle real-world text at scale — and introduced Osman, an open-source library that simplifies building and extending search pipelines.
PyCon Namibia 2025
Content-based recommendations for startups
Explored how content-based filtering works in practice, why it's often the smartest choice for startups and small teams, and walked through a real-world implementation in a sports social media app.
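The talk's real implementation isn't reproduced here, but the core of content-based filtering fits in a few lines: represent items as feature vectors and rank them by cosine similarity to a user's taste vector. The item names and feature dimensions below are purely hypothetical:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical item vectors over [football, tennis, running] affinity.
items = {
    "match_highlights": [1.0, 0.0, 0.1],
    "tennis_drills":    [0.0, 1.0, 0.2],
    "trail_running":    [0.1, 0.1, 1.0],
}

def recommend(user_profile, items, top_n=2):
    """Rank items by cosine similarity to the user's taste vector."""
    ranked = sorted(items,
                    key=lambda name: cosine(user_profile, items[name]),
                    reverse=True)
    return ranked[:top_n]

# A user who mostly engages with football content.
recs = recommend([0.9, 0.1, 0.0], items)
```

The appeal for startups is exactly this: no interaction history is needed to score a brand-new item, which sidesteps the cold-start problem that collaborative filtering runs into.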
PyCon Namibia 2024
From notebooks to production ML
Walked through the full lifecycle of taking a machine learning model from experimentation to a reliable, scalable production system.
PyCon Senegambia 2023
Best practices for aspiring ML engineers
Shared my personal best practices for young developers who aspire to move into the machine learning space — focusing on what actually matters in production.

