Brainberg

AI & ML Research Events in Europe

Europe's AI and ML research scene is a long tail of university labs, community research groups, and practitioner-run meetups that together punch well above their weight globally. This page covers the research end of the AI/ML event calendar: deep-learning meetups, computer vision and NLP groups, paper-reading clubs, and research-oriented conferences. It's aimed at ML engineers, applied researchers, and PhD students who care about how the models work, rather than how to integrate them into a product (that's the Applied AI category).

Anchor events include the MLcon series (Berlin, Munich, Amsterdam, London), PyData conferences, and regional deep-learning meetups like the Vienna Deep Learning Meetup. Quantum AI and quantum-computing events sit here too, since the European quantum community is research-heavy and overlaps meaningfully with the ML research crowd. Topics cover training and serving frameworks, fine-tuning techniques, evaluation, quantization, model architectures, and the infrastructure that makes experimentation tractable.

Brainberg aggregates these into a single chronological European view. For the deluge of "how to ship a feature with an LLM" events, see the Applied AI category instead.

Upcoming events

AI/ML Research & Engineering · Meetup · Free · Online

May 11 - Best of 3DV 2026

Welcome to the Best of 3DV series, your virtual pass to some of the groundbreaking research, insights, and innovations that defined this year’s conference. Live streaming from the authors to you.

Date, Time and Location

May 11, 2026
9AM Pacific
Online. Register for Zoom!

Navigating a 3D Vision Conference with VLMs and Embeddings

Attending the 3D Vision Conference means confronting 177 accepted papers across 3.5 days, far more than any one person can absorb. Skimming titles the night before isn't enough.

This talk shows how to build a systematic, interactive map of an entire conference using modern open-source tools. We load all 177 papers from 3DV 2026 (full PDF page images plus metadata) into a FiftyOne grouped dataset. We then run three annotation passes using Qwen3.5-9B on each cover page: topic classification, author affiliation extraction, and project page detection. Document embeddings from Jina v4 are computed across all 3,019 page images, pooled to paper-level vectors, and fed into FiftyOne Brain for UMAP visualization, similarity search, representativeness scoring, and uniqueness scoring.

The result is an interactive dataset you can query, filter, and explore in the FiftyOne App. Sort by uniqueness to find distinctive work, filter by topic and sort by representativeness to understand each research area, and cross-reference with scheduling metadata to build a personal agenda.

I demonstrate the end-to-end pipeline and discuss design decisions regarding grouped datasets, reasoning model output parsing, and embedding pooling strategies.
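The pooling step — collapsing per-page embeddings into one vector per paper — can be sketched in plain NumPy. Mean pooling with L2 normalization is one common strategy; the talk may well use a different one, and the function name here is illustrative:

```python
import numpy as np

def pool_paper_embeddings(page_embeddings, paper_ids):
    """Mean-pool per-page embeddings into one L2-normalized vector per paper.

    page_embeddings: (num_pages, dim) array of document embeddings
    paper_ids:       length-num_pages sequence mapping each page to its paper
    """
    papers = {}
    for pid, emb in zip(paper_ids, page_embeddings):
        papers.setdefault(pid, []).append(emb)
    pooled = {pid: np.mean(embs, axis=0) for pid, embs in papers.items()}
    # Normalize so cosine similarity reduces to a dot product downstream
    return {pid: v / np.linalg.norm(v) for pid, v in pooled.items()}
```

Paper-level vectors of this shape are what similarity search, representativeness, and uniqueness scoring would then consume.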

About the Speaker

Harpreet Sahota is a hacker-in-residence and machine learning engineer with a passion for deep learning and generative AI. He has a deep interest in VLMs, Visual Agents, Document AI, and Physical AI.

Seeing Through Clutter: Structured 3D Scene Reconstruction via Iterative Object Removal

We present SeeingThroughClutter, a method for reconstructing structured 3D representations from single images by segmenting and modeling objects individually. Prior approaches rely on intermediate tasks such as semantic segmentation and depth estimation, which often underperform in complex scenes, particularly in the presence of occlusion and clutter.

We address this by introducing an iterative object removal and reconstruction pipeline that decomposes complex scenes into a sequence of simpler subtasks. Using VLMs as orchestrators, foreground objects are removed one at a time via detection, segmentation, object removal, and 3D fitting. We show that removing objects allows for cleaner segmentations of subsequent objects, even in highly occluded scenes. Our method requires no task-specific training and benefits directly from ongoing advances in foundation models. We demonstrate state-of-the-art robustness on 3D-Front and ADE20K datasets.
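The control flow of the remove-and-reconstruct loop can be sketched as below. The four callables are placeholders for the foundation models the abstract says a VLM orchestrates (detector, segmenter, removal/inpainting model, 3D fitter); their names are hypothetical, not from the paper:

```python
def reconstruct_scene(image, detect_next, segment, inpaint, fit_3d, max_objects=50):
    """Peel off foreground objects one at a time, fitting a 3D model to each."""
    models = []
    current = image
    for _ in range(max_objects):
        detection = detect_next(current)   # next frontmost / least-occluded object
        if detection is None:              # no foreground objects left: stop
            break
        mask = segment(current, detection)
        models.append(fit_3d(current, mask))
        current = inpaint(current, mask)   # remove it, exposing occluded regions
    return models
```

Each removal simplifies the scene, so later segmentations run on progressively less cluttered inputs — the observation the abstract highlights.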

About the Speaker

Rio Aguina-Kang is currently a Machine Learning Engineer at Drafted AI, a startup focused on generative architecture. He has previously worked at Adobe Research, Brown Visual Computing, and the Stanford Institute for Human-Centered Artificial Intelligence. He is broadly interested in building systems that let users generate and control visual content through structured representations that reflect their intent.

Physical Realistic 4D Generation

Generating dynamic 3D content that moves and deforms over time is a key frontier in visual computing, with applications in VR/AR, robotics, and digital humans. In this talk, I present our series of works on physically realistic 4D generation: from neural surface deformation with explicit velocity fields (ICLR 2025) to our 4Deform framework for robust shape interpolation (CVPR 2025). Both methods use implicit neural representations with physically constrained velocity fields that enforce volume preservation, spatial smoothness, and geometric consistency. I will also introduce TwoSquared (3DV 2026, oral), which achieves full 4D generation from just two 2D image pairs — demonstrating a practical path toward controllable, physically plausible 4D content creation.
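One compact way to see the volume-preservation constraint (my gloss, not the papers' exact loss formulation): if points are advected by a velocity field $v$, the enclosed volume is preserved exactly when the field is divergence-free.

```latex
\frac{\partial x}{\partial t} = v(x, t), \qquad
\nabla \cdot v = 0 \;\Longrightarrow\;
\frac{d}{dt}\,\mathrm{Vol}(\Omega_t) = \int_{\Omega_t} \nabla \cdot v \, dV = 0
```

In learned settings, constraints like this are typically imposed as soft penalties on the velocity field alongside smoothness and consistency terms.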

About the Speaker

Lu Sang is a PhD researcher in Computer Vision at TU Munich (Prof. Daniel Cremers), specializing in 3D/4D reconstruction, neural implicit surfaces, and inverse rendering, with several publications at top venues including CVPR, ICLR, and ECCV. She is currently a research intern at Google XR in Zurich. With a strong mathematical foundation and a track record spanning photometric stereo to 4D generation, she brings both theoretical depth and hands-on engineering to cutting-edge visual computing research.

Finding NeMO: A Geometry-Aware Representation of Template Views for Few-Shot Perception

How can we perceive and use objects given only a few images, without training a new model? We present NeMO, a novel object representation that enables 6DoF object pose estimation, detection, and segmentation from only a handful of RGB images of an unknown object.

About the Speaker

Sebastian Jung studied physics at LMU Munich. He started his PhD in Computer Science at the German Aerospace Center (DLR) in 2025 and focuses on object-centric few-shot perception for robotic applications. Additionally, he's a student researcher at Google, working on computer vision algorithms for XR.

Mon 11 May · 16:00 · < 50
AI/ML Research & Engineering · Meetup · Free

Invitation to the 4th Embedded AI Community Meetup in Brno

Brno, 🇨🇿 Czechia

Join us for the spring edition of our Embedded AI community, for the fourth time in Brno. This time we'll be hosted by Mycroft Mind, where we'll again focus on practical uses of Embedded and Edge AI in real-world projects, short lightning talks, open discussion, and sharing development experience (over a beer).

Program

• Introduction of the community and attendees.
(Embedded AI Community organizers from Yunex Traffic)

• Talk and lightning talk:

Talk: SkyEye – an autonomous short-term photovoltaic (PV) production predictor
(Martin Tříska, Mycroft Mind)

An introduction to a compact, cost-effective production predictor with an integrated camera and sensors. We'll walk through a real implementation of image and numerical data processing on an embedded platform using Edge AI.

Lightning talk: How we build and test firmware
– a practical look at our tools and workflow
(Bender Robotics team)

A brief look at how Bender Robotics approaches building and testing firmware (the tools they use, their workflow, and practical lessons from development).

Moderated discussion: Using AI in embedded development

The lightning talk will also include a moderated discussion on using AI in embedded development.

If you have relevant experience of your own, you're welcome to join the discussion; you can also get in touch beforehand and we'll arrange your contribution.

Tour of the Mycroft Mind lab
A chance to take a short tour of the Mycroft Mind laboratory environment.

The demonstrator presents an intelligent metering infrastructure for a low-voltage grid in a laboratory setting. The system consists of five electricity meters equipped with edge modules, connected through a simulated grid whose topology can be reconfigured dynamically and remotely.
Users can choose any wiring between the individual meters. An algorithm is then run that, by analyzing voltage measurements, autonomously detects the grid's actual topology.
The demonstrator illustrates a practical use case for electricity distributors: automatic detection and validation of low-voltage grid wiring. This approach improves visibility into the grid, simplifies its management, and lays the groundwork for advanced energy-intelligence functions directly at the metering level.
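The demonstrator's algorithm isn't described in detail, but a minimal version of the underlying idea — meters on the same feeder see correlated voltage profiles — can be sketched with synthetic data (this is one plausible approach, not Mycroft Mind's actual method):

```python
import numpy as np

def infer_topology(voltages):
    """Guess each meter's nearest electrical neighbor from voltage time series.

    voltages: (n_meters, n_samples) array of per-meter voltage measurements.
    Returns a dict mapping meter index -> index of its most correlated peer.
    """
    corr = np.corrcoef(voltages)
    np.fill_diagonal(corr, -np.inf)  # ignore self-correlation
    return {i: int(np.argmax(corr[i])) for i in range(len(voltages))}
```

A real deployment would add time alignment, noise filtering, and validation against the configured topology, but the correlation structure is the core signal.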

• Networking & discussion
Throughout the evening there will be room for debate, sharing experience, and networking in an informal atmosphere over beer/soft drinks and light refreshments.

Why come?
We want to create an open space for sharing experience with Embedded and Edge AI. We enjoy working with platforms like Nvidia Jetson, ARM, Intel, and ESP, tinkering with embedded OSes (Yocto, Ubuntu), tuning code in C, C++, and MicroPython, and following new technologies including powerline communication.

Event capacity is limited. The organizer reserves the right to decide on individual attendees' participation if capacity is exceeded or for organizational reasons.

We ask all attendees to provide the following when registering:
• First and last name
• Company

If you'd like to give your own lightning talk or briefly share your experience using Embedded / Edge AI in development, let us know (for this event or a future one).

We look forward to seeing you!

The Embedded AI Community team

Wed 13 May · 15:00 · < 50
AI/ML Research & Engineering · Meetup · Free

73rd Deep Learning Meetup: Optimizers / Speech Recognition in Air Traffic

Vienna, 🇦🇹 Austria

Hi Deep Learners,

We are happy to announce our next Vienna Deep Learning Meetup on May 19 at Bosch. Our topics this time are: Improved Optimizers for Deep Learning and Automatic Speech Recognition in Air Traffic Management.

***
Agenda:

  • 18:15 Arrival
  • 18:30 Introduction by the meetup organizers
  • Welcome by the host: Bosch
  • 18:45 Talk 1: Momentum, Preconditioning, and Beyond: Practical Advances in Optimization by Ionut-Vlad Modoranu (ISTA)
  • 19:30 Announcements
  • Networking Break
  • 20:00 Talk 2: Cleared for Takeoff: Automatic Speech Recognition in Air Traffic Management by Christian Sobtzick & Sahebeh Dadboud (Frequentis)
  • 20:45 Networking
  • ~21:30 Wrap up & End

***

Talk Details:
--------------
Talk 1: Momentum, Preconditioning, and Beyond: Practical Advances in Optimization
Modern Deep Learning optimization is largely dominated by first-order methods such as Adam, yet their limitations in memory usage, scalability, and curvature awareness motivate the search for improved alternatives. This talk provides a practical perspective on optimization, starting from the roles of momentum and preconditioning and extending to recent advances that go beyond standard adaptive methods.

We present techniques that improve optimization along three key axes: memory, structure, and computational efficiency. In particular, we introduce MicroAdam, a memory-efficient variant of Adam based on compressed optimizer states; Trion, a low-rank method that replaces costly SVD/QR projections with efficient, rank-independent alternatives; and DASH, a GPU-efficient implementation of Shampoo that accelerates second-order preconditioning via improved parallelization and fast matrix inverse root approximations.

Together, these methods illustrate how careful algorithmic and systems-level design can overcome the practical limitations of existing optimizers, offering a path toward scalable, memory-efficient, and high-performance training beyond Adam.
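As context for the talk, the baseline it builds on — Adam's combination of momentum (first moment) and diagonal preconditioning (second moment) — fits in a few lines. This is a textbook sketch, not any of the speaker's methods:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update. m: momentum (1st moment), v: preconditioner (2nd moment)."""
    m = b1 * m + (1 - b1) * grad        # momentum: smoothed gradient
    v = b2 * v + (1 - b2) * grad ** 2   # preconditioning: per-coordinate scale
    m_hat = m / (1 - b1 ** t)           # bias correction for zero init
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

The talk's methods target exactly these states: MicroAdam compresses them to save memory, while Shampoo-style second-order methods (as accelerated by DASH) replace the diagonal `v` with richer matrix preconditioners.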

About the speaker:
Ionut-Vlad Modoranu is a PhD student at the Institute of Science and Technology Austria (ISTA), specializing in efficient optimization for Deep Learning. His research focuses on reducing memory usage and computational cost while maintaining performance, including the development of practical optimizers for large-scale models, with publications at top-tier international conferences.

Talk 2: Cleared for Takeoff: Automatic Speech Recognition in Air Traffic Management
Automatic Speech Recognition (ASR) has made tremendous progress in recent years. However, its deployment in safety‑critical operational environments poses a fundamentally different set of challenges compared to demonstration systems or research prototypes. In this talk, we present how speech‑to‑text models are used in a real Air Traffic Management (ATM) product at Frequentis. We walk through the end‑to‑end lifecycle of an ASR system: how training data is collected and curated, how models are selected and evaluated, and what it takes to deploy and operate them reliably for customers in a regulated, safety‑critical domain. Beyond the models themselves, we discuss practical challenges from bringing ASR from research into production, where reliability and trust matter as much as accuracy.
About the Speakers:
Sahebeh (Saba) Dadboud is an Expert Data Scientist at the Frequentis AI Competence Centre. With over eight years of experience in the tech industry, she specializes in bridging Deep Learning, MLOps, and product development to build trustworthy AI for safety-critical systems. At Frequentis, she plays a pivotal role in driving company-wide AI initiatives, supporting the adoption of advanced technologies in domains where reliability, safety, and strict regulatory constraints are absolutely critical. She holds a Master’s degree in Computational Science from the University of Vienna.
Christian Sobtzick is an Audio Expert at Frequentis, leading the HET Audio Competence Centre. He is responsible for improving audio quality across all Frequentis voice communication products, drawing on more than 15 years of professional experience. In addition, he supports product development related to automatic speech recognition and contributes to research projects, such as the FFG “Next Generation Safety” project, which aimed to enable AI‑based automation in Air Traffic Management. His background is in acoustics, including electroacoustic transducers, acoustic measurements, and signal processing. He holds an MSc in Electrical Engineering and Audio Engineering from Graz University of Technology and the University of Music and Performing Arts Graz.

We are looking forward to welcoming you at our next meetup.
Your VDLM organizer team.

Tue 19 May · 16:30 · 50–200
AI/ML Research & Engineering · Meetup · Free

Revisiting Adam for Streaming Reinforcement Learning

Bucharest, 🇷🇴 Romania

Bucharest Deep Learning is back with another exciting session! Join us for a deep dive into Streaming Reinforcement Learning alongside Florin Gogianu, presenting recent research on making pure sequential learning highly competitive.

The Talk: Revisiting Adam for Streaming Reinforcement Learning

Learning from interactions as soon as observations are perceived, without explicitly storing them, promises simpler, more efficient, and adaptive algorithms. However, for over a decade, Deep RL has relied heavily on memory-intensive replay buffers or parallel sampling to tame learning instability.

This talk takes a step back to investigate the efficacy of established batch-RL algorithms, like DQN and C51, within the pure online streaming setting. Florin will show that, when appropriately tuned, standard optimization techniques can be surprisingly effective. The presentation will unpack the "unreasonable effectiveness" of the Adam optimizer, demonstrating that two properties are essential for robust streaming performance: bounded objective derivatives and variance-adjusted weight updates.

Finally, the talk will introduce Adaptive Q(λ) (AQ(λ)). By combining eligibility traces with Adam's variance adaptation mechanism and a bounded error signal, this new algorithm surpasses existing streaming methods and approaches double the human baseline across a subset of 37 Atari games.
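AQ(λ)'s exact update rule isn't given above, but the two ingredients the talk names — eligibility traces plus variance-adjusted updates of a bounded error signal — can be illustrated on a one-parameter TD problem. This is purely a toy sketch under my own assumptions, not the algorithm from the paper:

```python
import math

def streaming_td(steps=2000, alpha=0.05, gamma=0.9, lam=0.9, beta=0.99, eps=1e-8):
    """Streaming TD(lambda) on a 1-state MDP with reward 1 (true value = 10)."""
    w, z, v = 0.0, 0.0, 0.0
    for _ in range(steps):
        delta = 1.0 + gamma * w - w          # TD error, feature phi = 1
        z = gamma * lam * z + 1.0            # accumulating eligibility trace
        v = beta * v + (1 - beta) * delta ** 2        # running variance of the error
        w += alpha * z * delta / (math.sqrt(v) + eps) # variance-adjusted update
    return w
```

The variance term keeps the effective step size roughly constant whether TD errors are large (early in training) or tiny (near convergence), which is one reading of why Adam-style adaptation matters in the streaming setting.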

Logistics

  • Date & Time: Tuesday, May 26 | 18:30 - 19:30
  • Location: FMI New Building (Politehnica Business Tower)
  • Address: Bulevardul Iuliu Maniu, nr. 15G, Etaj 5, Room 503
Tue 26 May · 15:30 · < 50
AI/ML Research & Engineering · Meetup · Free

#34 AI Series: HuggingFace - G. Channing

Berlin, 🇩🇪 Germany

We are excited to feature Georgia Channing, who currently leads the AI for Science team at Hugging Face. She will discuss "When Silicon Meets Carbon: Bringing Synthetic Biology to Life" for approximately 45 minutes. After the talk, seize the opportunity to connect with fellow AI enthusiasts and share ideas and questions over free drinks and pizza. Doors close at 7:15pm, so please come early! RSVPing ("attending") here on Meetup is strictly necessary to be guaranteed entry.
Please note that Meetup has recently been quite keen on promoting its Plus program. However, you are not obligated to purchase it, as both our events and the platform remain free.

Who is this event for?
This event is open to everyone interested in state-of-the-art AI research. We especially design it for students, PhD candidates, academic researchers, and industry professionals with a research focus in machine learning.

Abstract: TBA

Bio: TBA

We are BLISS e.V., the AI organization in Berlin that connects like-minded individuals who share a great interest in and passion for machine learning. This summer 2026 we will again host an exciting on-site speaker series in Berlin, featuring excellent researchers from Cohere, ETH Zürich, the University of Oxford, HuggingFace, and Stanford University.
Website: https://bliss.berlin
Youtube: https://www.youtube.com/@bliss.ev.berlin

Disclaimer: By attending this event you agree to be photographed.

Tue 2 Jun · 16:45 · 50–200
AI/ML Research & Engineering · Meetup · Free · Online

Azure Machine Learning Step 5: Deploying & Operating Models

In this fifth session of the Azure Machine Learning series, we’ll take the next critical step in the ML lifecycle: moving trained models into production and keeping them running reliably. Azure Machine Learning provides robust tools for deploying models as scalable endpoints, managing versions, and monitoring performance in real-world environments.

This session focuses on the practical side of deployment and operations (MLOps) within Azure ML. You’ll learn how to take a trained and registered model and turn it into a production-ready service, while also understanding how to manage, monitor, and update that service over time. Whether you’re continuing from Step 4 or already familiar with model training, this session will help you bridge the gap between experimentation and real-world impact.

You’ll learn:

  • How deployment fits into the machine learning lifecycle
  • Options for deploying models in Azure ML (real-time endpoints, batch endpoints)
  • How to create and manage online endpoints using the Studio UI, SDK, and CLI
  • How to package models with environments, scoring scripts, and dependencies
  • Techniques for scaling, versioning, and updating deployments (blue/green strategies)
  • How to monitor model performance, logs, and resource usage
  • Best practices for reliability, cost optimization, and governance
  • How to integrate deployed models into applications and workflows

This session is designed to help you move from “I have a trained model” to “I can deploy and operate it in production with confidence.” If you’re ready to deliver real value from your machine learning solutions and ensure they perform reliably at scale, this is your next step.
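As a taste of what the session covers, a managed online endpoint plus one deployment can be described declaratively for the Azure ML CLI (`az ml`). The names, model version, and VM size below are illustrative, not from the session materials:

```yaml
# endpoint.yml — create with: az ml online-endpoint create -f endpoint.yml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: my-endpoint            # illustrative name
auth_mode: key

# deployment.yml — create with: az ml online-deployment create -f deployment.yml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: my-endpoint
model: azureml:my-model:1    # a registered model (illustrative)
instance_type: Standard_DS3_v2
instance_count: 1
```

With a second deployment registered, a blue/green rollout becomes a traffic update, e.g. `az ml online-endpoint update --name my-endpoint --traffic "blue=90 green=10"`.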

Wed 10 Jun · 22:00 – 23:30