AI in the Wild: Streaming, Serving, and Scaling Production Systems
AI Integration & Application · Meetup · Free


Wed 20 May · 15:30
Amsterdam, 🇳🇱 Netherlands
< 50 attendees
Adyen · Simon Carmiggeltstraat 6-50

About this event

⚠️ Important Note:
PyData Amsterdam is transitioning to Luma.

  • Please subscribe to PyData Amsterdam on Luma: https://luma.com/pydataamsterdam
  • And register at the Luma link for this event: https://luma.com/j9kvw2dk

---------------------------------------------------------------------------------

Join our May edition meetup on Wednesday, May 20, 2026 at the Adyen Amsterdam office for an evening dedicated to the engineering of high-scale, real-world AI systems. We will explore how Apache Flink powers resilient, real-time features within Adyen's payment engine, and dive into the architecture of a self-hosted AI stack designed for absolute data sovereignty and cost control.

This is a great opportunity for data scientists, ML engineers, and software engineers to network and learn about cutting-edge MLOps practices at a global payments company and within a fully self-hosted AI stack.
Note: To ensure a smooth check-in process, please register using your full name and bring a valid ID card to the venue.

Agenda:
17:30 - 18:25: Walk-in with drinks & food 🍺🍕
18:25 - 18:30: Adyen’s intro
18:30 - 19:15: Talk 1: Accelerating and protecting shoppers' payments with Apache Flink at Adyen
19:15 - 19:45: Break
19:45 - 20:30: Talk 2: From Notebook to Living Room: Building a Self-Hosted AI Stack That Replaces SaaS
20:30 - 21:00: Networking / drinks

Talk 1: Accelerating and protecting shoppers' payments with Apache Flink at Adyen, by Vitalii Zhebrakovskyi and Matteo Tonelli

Data Scientists at Adyen have long developed and continually improved ML models used at various stages of payment processing. Whether it is fraud detection, payment scheme routing optimization, bot attack detection, or payouts, ML has consistently delivered better results than traditional rule-based, manually configured tools and has become a core driver of growth for the business.

With the growing adoption of ML, demand for model features has been steadily rising. Custom-built features refreshed daily were no longer sufficient; easy-to-build, real-time features that could be used reliably at global scale became a necessity to improve the product.

In this session, the team behind the Adyen Feature Platform will present how Apache Flink is used at the core of Adyen's payment processing engine, running on top of Adyen's Global Managed Flink Platform to provide resilience in the face of disaster, so that ML models can rely on their features being fresh.
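The talk itself will cover the real architecture; as a toy illustration of the kind of real-time feature such a platform maintains (all names here are hypothetical, not Adyen's code), the sketch below emulates a per-merchant sliding-window event counter, the sort of aggregate a Flink job would keep in keyed state with event-time windows:

```python
from collections import defaultdict, deque

class SlidingCountFeature:
    """Toy per-key sliding-window counter, standing in for what a
    Flink job would maintain via keyed state and event-time windows."""

    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.events = defaultdict(deque)  # key -> event timestamps, in order

    def update(self, key: str, event_time: float) -> int:
        """Record one event and return the current count in the window."""
        q = self.events[key]
        q.append(event_time)
        # Evict events that have fallen out of the window.
        while q and q[0] <= event_time - self.window:
            q.popleft()
        return len(q)

# Example: payments per merchant over a 60-second window.
feature = SlidingCountFeature(window_seconds=60)
feature.update("merchant_a", 0.0)
feature.update("merchant_a", 30.0)
print(feature.update("merchant_a", 70.0))  # → 2 (the event at t=0 expired)
```

A real Flink pipeline would additionally handle out-of-order events, watermarks, and state checkpointing, which is exactly where a managed platform earns its keep.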

Vitalii Zhebrakovskyi is a Senior Software Engineer at Adyen working in the fraud prevention and MLOps fields. He is a major contributor to Adyen's Feature Platform, with a primary focus on stream processing with Apache Flink. His career spans more than 20 years and a wide variety of domains and languages, lately focusing on Java and payment processing.

Matteo Tonelli is a Software Engineer at Adyen focusing on streaming. During his journey at Adyen, he has mostly worked on enabling machine learning scientists to train, deploy, and monitor their models, providing them with the data their models need, when they need it.

Talk 2: From Notebook to Living Room: Building a Self-Hosted AI Stack That Replaces SaaS, by Calogero Zarbo

Most data scientists interact with AI through managed notebooks and APIs, but what happens when you want full control over your models, your data sovereignty, and your costs? In this talk, I'll walk through the architecture of a self-hosted local AI stack running on consumer hardware (RTX 5090), accessible securely via Tailscale VPN, and serving multiple concurrent users.

We'll cover the core engineering decisions behind ditching Ollama for direct llama.cpp integration to eliminate abstraction taxes, and building a custom FastAPI scheduler for dynamic model switching between heavy reasoning models (DeepSeek-R1 70B) and agile assistants (Qwen3.6).
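The speaker's FastAPI scheduler is not public; as a minimal sketch of the core idea — serializing model swaps so only one model occupies the GPU at a time — the following asyncio snippet uses entirely hypothetical loader names in place of real llama.cpp model loads:

```python
import asyncio

class ModelScheduler:
    """Toy scheduler: keeps at most one model resident and swaps on
    demand, serializing swaps and inference behind a single lock."""

    def __init__(self, loaders):
        self.loaders = loaders        # model name -> async load function
        self.current_name = None
        self.current_model = None
        self._lock = asyncio.Lock()

    async def infer(self, model_name: str, prompt: str) -> str:
        async with self._lock:        # one swap/inference at a time
            if model_name != self.current_name:
                self.current_model = await self.loaders[model_name]()
                self.current_name = model_name
            return self.current_model(prompt)

# Hypothetical loaders standing in for heavy/light llama.cpp models.
async def load_heavy():
    return lambda p: f"heavy:{p}"

async def load_light():
    return lambda p: f"light:{p}"

async def main():
    sched = ModelScheduler({"deepseek-r1-70b": load_heavy, "qwen": load_light})
    print(await sched.infer("qwen", "hi"))             # → light:hi
    print(await sched.infer("deepseek-r1-70b", "hi"))  # → heavy:hi

asyncio.run(main())
```

In a real FastAPI service the `infer` coroutine would sit behind an endpoint, and the swap cost (tens of seconds for a 70B model) is precisely why routing logic matters.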

Crucially, we'll dive into architectural pivots born from real bottlenecks: how our initial "always-loaded" CPU-only utility server for tool routing created blocking latency that killed chat responsiveness, forcing a redesign toward async decoupling via Redpanda/Kafka and targeted API offloads. We'll also touch on the broader infrastructure layer—systemd service chaining, Docker networking, and replacing SaaS project management with self-hosted alternatives like Huly, Joplin, and WorkAdventure.
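The async-decoupling pivot described above can be sketched in a few lines: the chat path acknowledges immediately and hands slow tool work to a queue consumed off the critical path. Here an in-process `asyncio.Queue` stands in for the Redpanda/Kafka topic; everything else is a hypothetical illustration, not the speaker's implementation:

```python
import asyncio

async def chat_handler(queue: asyncio.Queue, prompt: str) -> str:
    """Fast path: enqueue the slow tool work instead of blocking on it
    (the queue stands in for a Redpanda/Kafka topic)."""
    await queue.put({"task": "tool_call", "prompt": prompt})
    return f"ack:{prompt}"            # the user gets a response right away

async def tool_worker(queue: asyncio.Queue, results: list):
    """Background consumer that drains the queue off the chat path."""
    while True:
        job = await queue.get()
        await asyncio.sleep(0)        # stand-in for slow tool routing
        results.append(f"done:{job['prompt']}")
        queue.task_done()

async def main():
    queue, results = asyncio.Queue(), []
    worker = asyncio.create_task(tool_worker(queue, results))
    reply = await chat_handler(queue, "weather?")
    await queue.join()                # wait for background work to finish
    worker.cancel()
    return reply, results

reply, results = asyncio.run(main())
print(reply, results)  # → ack:weather? ['done:weather?']
```

The payoff is the same as in the talk's redesign: tool latency no longer sits on the chat request path, it only delays the asynchronous result.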

This started as a hobby, but quite rapidly grew into an application of advanced engineering principles to personal infrastructure. You'll leave with concrete patterns for local model serving, knowing when to kill components that look good on paper but fail in practice, and how to build resilient AI tooling you actually control.

Calogero, in short Cal, is the Head of Advanced Computing at Sandbox Wealth, where he leads AI, ML, and quantum computing strategy for fintech applications. With over 15 years across computational biology, e-learning, and data science consulting, Cal has shipped production systems for clients including Satispay (where he pioneered Italy's first quantum-classical hybrid optimization in fintech using D-Wave) and Artefact. He co-founded Algoritmica.AI - an AI venture for credit risk assessment that was acquired in 2021 - and currently lectures on Quantum Machine Learning at Ca' Foscari Challenge School in Venice. When not building enterprise AI systems, Cal architects his self-hosted home lab to replace every SaaS subscription he can find.

DIRECTIONS
Address:
Simon Carmiggeltstraat 6-50,
1011 DJ Amsterdam

The event will be held at the Adyen SC office event space, which is just a seven-minute walk from Amsterdam Central Station. Please enter through the main entrance (the revolving doors) and collect your visitor badge at the reception desk.

Source: meetup