It’s 2025, AI is still a hot topic, and everyone’s talking about Agents. Cool stuff, but join me for a dive into Post-Training. I’ll share my journey of picking a small open model, customizing it for my needs, and running it on my cheap smartphone. Joys, frustrations, and lots of learning!
In 2025, AI is still evolving rapidly. While closed LLMs are continuously improving, open Small Language Models are emerging as powerful alternatives for specific use cases, consuming only a fraction of the resources.
Working in AI engineering, I often find it refreshing to step away from orchestration and get hands-on with fine-tuning, customizing, and optimizing Small Models. In this talk, I’ll share my journey through Post-Training Small Language Models, full of joys, frustrations, and many lessons learned.
Together, we’ll explore:
By the end, you’ll learn how to customize Small Language Models for your needs and potentially run them on your smartphone. I’ll also share practical examples from my experience improving open models for the Italian language.
💫 Software Engineer with a passion for Language Models, open source and knowledge sharing.
🔍 Previously at 01S, I specialized in extracting and retrieving information from unstructured documents, making valuable information accessible to Italian citizens.
👨💻 Now at deepset, I contribute to Haystack, an open-source LLM framework, and its ecosystem. I enjoy engaging with the community and sharing what I learn and work on.