TU Wien CAIML

iCAIML Retreat 2025

The first iCAIML Doctoral College retreat took place on September 23-24 in St. Lorenzen, Styria.

About the Event

Over two days, the retreat provided iCAIML researchers with the opportunity to step back from their daily work and focus on skills essential for an academic and professional career. Beyond disciplinary expertise, the program emphasized proposal writing, career planning, science communication, and the ability to situate scientific work in wider social and regulatory contexts. Shared meals, discussions, and a visit to Vorau Abbey (Stift Vorau) complemented the talks and created space for reflection and exchange in a supportive setting.

Zeynep Aksöz Balzar opened with a critical look at how AI shapes architecture and design. Esther Heid shared practical strategies for writing successful funding proposals. Nermina Mumic and Peter Filzmoser offered perspectives on building careers in both start-ups and academia. Julian Dorffner examined the EU’s regulatory framework on AI law and its unresolved challenges. Niki Popper and Florian Aigner emphasized the importance of clear and accessible science communication. Together, these contributions strengthened participants’ soft skills, which are crucial for thriving both in academia and beyond. Finally, the researchers and the iCAIML team agreed upon the next steps for a successful interdisciplinary exchange.

Talks and Speakers

AI Overload

Zeynep Aksöz Balzar

This lecture explores the physical embodiments of artificial intelligence and their transformative role in architecture and design. While AI has long been understood as an abstract, computational system, recent developments foreground its potential as an agent embedded in material, spatial, and ecological contexts. The talk will situate where we currently stand with AI—between generative design tools, machine learning systems, and large-scale language models—while critically questioning their cultural, social, and spatial implications.

From the perspective of architecture, AI is no longer confined to optimization or automation; it becomes a collaborative partner that can shape emergent forms, multisensory interactions, and adaptive environments. By presenting case studies from research and practice, the lecture will propose new perspectives on how architecture can engage with AI: as a medium for co-creation, as a catalyst for ecological awareness, and as a driver of speculative futures where the boundaries between human, non-human, and artificial agents are redefined.

Zeynep Aksöz Balzar

Zeynep Aksöz Balzar is a Turkish architect, creative coder, and arts-based researcher whose transdisciplinary practice explores the intersection of algorithmic systems, technologically mediated interaction, and AI embodiment. Her work focuses on reciprocal relationships between human, non-human, and artificial agents, developing emergent design approaches through playful, collaborative, and process-driven methodologies.

She is Assistant Professor at TU Wien, Research Associate and Lecturer at the University of Applied Arts Vienna, and Faculty at IAAC Barcelona. She co-leads the Vienna-based design and research collective Open Fields, where she engages with ecology, interaction, and machine learning in both artistic and architectural contexts. Zeynep has exhibited and presented internationally, including at the Venice Architecture Biennale, London Design Festival, BOZAR Brussels, Kuandu Museum of Art Taipei, and the Estonian Academy of Arts. Her creative and scientific work has been published in Fabricate, Design Transactions, and in proceedings such as ACADIA and the Design Modeling Symposium.

She is the recipient of the Austrian Chamber of Architects’ Research Prize, the Margarete Schütte-Lihotzky Stipendium, AI in Arts funding, and is a former Marie Curie Fellow. She is currently co-leading funded projects such as Vibrant Fields (FWF PEEK) and Morphology of Sound (INTRA). Educated in Vienna, London, and Madrid, she holds a Doctoral degree from the University of Applied Arts Vienna, an MArch from the Architectural Association’s Emergent Technologies and Design program, and an MSc from TU Wien.

Proposal writing: How to sell your project idea

Esther Heid

Turning a research idea into a funded project requires more than excellent science. It also requires the ability to convince reviewers that your vision is both exciting and feasible. In this talk, I will share practical strategies for writing strong proposals, drawing on my own experiences with programs such as the ERC Starting Grant, FWF START, FFG BRIDGE industry projects, FWF Schrödinger postdoc fellowships, and ÖAW DOC. We will discuss how to craft a compelling story that highlights novelty, impact, and feasibility, how to tailor applications to different funding schemes, and how to present your project in a way that captures the reviewers’ attention. Beyond writing, I will also address how to deal with possible rejection letters and how to adapt when a funded project takes unexpected turns. The aim is to provide PhD students with concrete tools and an honest perspective on proposal writing, helping them sell their project ideas effectively while building resilience for the ups and downs of an academic career.

Esther Heid

Esther Heid received her Ph.D. in Theoretical Chemistry from the University of Vienna in 2019. She subsequently held postdoctoral positions at MIT (2020–2022) and TU Wien (2022–2024), where she advanced her work at the interface of chemistry and machine learning. In 2024, she was awarded a prestigious ERC Starting Grant, as well as an FWF START grant, which supported her promotion to tenure-track Assistant Professor at TU Wien. Her research group focuses on deep learning approaches for chemical reactions, with an emphasis on graph-convolutional neural networks, graph transformers, and language transformers, as well as developing new generative approaches to model reaction pathways.

How to build/plan your Network/Career in Academics, and from the point of view of a Start-up

Nermina Mumic and Peter Filzmoser

Nermina Mumic: The streaming market has grown rapidly in recent years and, especially due to the COVID-19 pandemic, has become one of the most important sources of income for artists. Until now, however, musicians had to trust that streaming services such as Spotify, YouTube, or Apple Music would provide the correct figures for calculating payouts at the end of the month. With Legitary, we have developed an ML solution that can verify these streaming figures and uncover incorrect settlements. We have analyzed more than 1.2 trillion streams from across the industry and found that, on average, about seven percent of revenues are not paid out correctly to music rights holders. We will also discuss how to establish industry contacts in order to gain access to data, and what other activities can be useful for a young start-up.

Peter Filzmoser: An academic career is associated with a strong scientific track record, which is usually linked to published research results, teaching experience, project acquisition, experience abroad, community service, and various other factors. Such achievements are difficult to attain as an individual researcher; it is more realistic to achieve them within a team or network. We will discuss ideas on how to join a network, what a suitable network might look like, and what advantages a network could offer. However, even the best possible network is no guarantee of an academic career, as there are many other important factors.

Nermina Mumic

Nermina Mumic completed her Master’s degree in Statistics and Mathematical Methods in Economics at TU Wien. After gaining experience at a consultancy company, she founded the Vienna-based start-up Legitary, which specializes in analyzing music streaming reports. Legitary is an award-winning TU Wien spin-off that holds two patents on ML technology and is highly visible in the international music industry. Nermina has received many awards, including the Women’s Award from TU Wien in 2023.

Peter Filzmoser

Peter Filzmoser is a Professor of Statistics at TU Wien, Austria. He received his Ph.D. and postdoctoral lecture qualification from the same university. He was a Visiting Professor in Toulouse, France, and in Belarus. Furthermore, he has authored more than 200 research articles and several R packages, and is a co-author of books on multivariate methods in chemometrics (CRC Press, 2009), on analyzing environmental data (Wiley, 2008), and on compositional data analysis (Springer, 2018).

AI and the Law: Examining the EU’s Regulatory Framework

Julian Dorffner

The rapid advancement of artificial intelligence (AI) poses a challenge for lawmakers who must adapt legal frameworks to this transformative technology. This talk examines three pillars of the EU’s regulatory landscape for AI and their interplay: the AI Act, liability rules, and the GDPR. As part of this, a brief recap of the legislative evolution will provide context for the current state of regulation.

The presentation will highlight the EU’s admirable regulatory achievements, but also examine some of its shortcomings: A year after the AI Act, the first “comprehensive” AI regulation, entered into force, many central questions regarding the scope of its provisions – including the scope and definition of so-called “AI systems” – remain open. In the field of liability law, the EU’s efforts to adopt AI-specific rules have so far failed. This leaves more broadly defined legal frameworks, such as the new Product Liability Directive, to fill the emerging accountability gap – often inadequately. At the same time, the rise of AI exposes limitations in the GDPR’s regulatory framework. Almost a decade after its adoption, personal data protection laws appear in need of adjustments to better address the technological advancement of AI.

Taken together, these selected developments illustrate both the progress and the challenges of regulating AI. While they represent only a fraction of the issues surrounding the field of “AI law”, they capture some of its most pressing points of discussion and will provide an introductory overview of the subject matter.

Julian Dorffner

Julian Dorffner is a PhD candidate and university assistant at the University of Vienna’s Faculty of Law, working under the supervision of Professor Christiane Wendehorst. He holds a law degree from the University of Vienna and spent a semester studying law at King’s College London. Together with his team he won the 30th William C. Vis International Commercial Arbitration Moot, the world’s largest student competition in the field of private law. His research examines the intersection of law and technology, with a particular focus on private law and European Union regulation. During several internships at international law firms, he helped advise technology companies and startups on complex legal issues.

Communicability, Simulation and AI – Transparency, Trust and Troubles & Making Science Heard: Media, Messages, and Mistakes to Avoid

Niki Popper and Florian Aigner

Niki Popper: Human-made systems are becoming increasingly important, but also increasingly opaque, while the costs required to manage them are rising dramatically. Both the measurement of efficiency and effectiveness on the one hand, and the assessment of the impact of new technologies on people and their lives on the other, are becoming ever more difficult, yet they remain essential. Although the quantity and quality of available data continue to grow and thereby facilitate description and analysis, suitable methods, models, and technologies are still needed to enable analysis, planning, and control. Above all, the question arises: how can these models, and the decisions based on them, be communicated?

Since the beginning of the computer age, there have been approaches aimed at transparent communicability. With System Dynamics, Jay Forrester provided a comprehensible modelling framework for “The Limits to Growth” by Donella Meadows et al. In visualization, the development spans from William Playfair’s foundations of statistical diagrams through Ivan Sutherland’s graphical user interfaces to today’s augmented reality applications. Osman Balci, in his Life Cycle of a Simulation Study, refers to the “Communicative Model,” which – alongside the “Investigation of Solution Techniques” – is necessary so that a model can be “judged or compared against the system and the study objectives by more than one human.” This leads to current developments that – as described by Adelinde Uhrmacher et al. – methodologically link model communication with context, composition, and automation. Concepts such as Serious Games, originally developed by Clark Abt, and Participatory Modelling have long been established means of fostering not only comprehensibility but also the involvement of diverse actors, thereby increasing acceptance.

All these approaches already consider various aspects of the interplay between the different phases of model development and application and the ultimate communicability of results. Communicating such models is not just about outcomes. Depending on the research question, the explainability of model functionality, the reproducibility and transparency of processes, the way model development was carried out, how validation was performed, and why a particular modelling methodology was chosen in the first place may all be crucial. Moreover, the question arises as to whether and how changes within a system can be represented in the model to analyse their effects. This creates a direct connection to current AI research: AI systems can process large datasets efficiently, detect patterns, and deliver powerful forecasts, but they often face persistent challenges with respect to explainability, interpretability, and trustworthiness. Black-box predictions pose the risk that decision-makers may not accept them – or may even misinterpret them – even when they are technically sound.

In this context, simulation can assume a complementary role. In contrast to purely data-driven approaches, simulations are based on explicit causal assumptions that can be transparently communicated, validated, and explored through scenarios or “what-if” analyses. In this way, AI results can be verified and embedded into causally consistent narratives. Simulations can thus function as consistency checks for AI outputs, reveal emergent behaviour, and contribute to explainability. Conversely, AI can accelerate simulations by providing surrogate models, heuristic optimization methods, or fast approximations for high-dimensional or time-critical problems.
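To make the surrogate idea concrete, here is a minimal Python sketch, not taken from the talk: the toy SIR-style simulation, the parameter ranges, and the use of a scikit-learn regressor are all illustrative assumptions. It trains a fast surrogate on a few hundred simulation runs so that many what-if scenarios can be screened cheaply, while candidate results are re-checked against the causal simulation as a consistency check.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def run_simulation(beta, gamma, steps=200):
    # Toy SIR-style epidemic simulation; returns the peak infected fraction.
    s, i, r = 0.99, 0.01, 0.0
    peak = i
    for _ in range(steps):
        new_inf = beta * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

# Sample (beta, gamma) pairs and run the (comparatively slow) simulation once per sample.
rng = np.random.default_rng(seed=0)
params = rng.uniform(low=[0.05, 0.01], high=[0.5, 0.2], size=(500, 2))
targets = np.array([run_simulation(b, g) for b, g in params])

# Fit a fast surrogate that maps parameters directly to the simulation output.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(params, targets)

# The surrogate screens many scenarios cheaply; promising candidates are
# re-run through the causal simulation as a consistency check.
candidate = np.array([[0.3, 0.1]])
print("surrogate estimate:", surrogate.predict(candidate)[0])
print("simulation check:  ", run_simulation(0.3, 0.1))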

The Covid-19 pandemic has shown that the success of scientific models in practice depends not only on their accuracy, but above all on their interpretability and credibility. A closer integration of AI and simulation therefore offers a promising path: AI provides speed and scalability, while simulation ensures causality, transparency, and trustworthiness. Together, these approaches can transform computational models into reliable decision-support tools for complex and high-risk environments.

Florian Aigner: Science doesn’t speak for itself – it needs a voice. Whether through newspapers, Twitter, or TikTok, research is constantly finding its way into the public conversation – sometimes accurately, sometimes not. In this interactive session, we’ll look at how science communication works, how the university’s communications team can help you, and how young researchers can get involved. After a short introduction, the floor is yours: let’s discuss your experiences, doubts, and ambitions when it comes to making science visible.

Niki Popper

Nikolas (Niki) Popper is a Senior Scientist at TU Wien, Austria, director of DEXHELPP, a research centre for decision support in health systems, and key researcher at VRVis – Competence Center for Visual Computing. He studied Technical Mathematics in Vienna, Barcelona, and Moscow, Idaho, and earned his PhD at TU Wien on comparative modelling and hybrid simulation. His research focuses on the theory and application of modelling and simulation of complex socio-technical systems, with particular emphasis on integrating approaches from data science, simulation, and AI. He co-founded dwh GmbH Simulation Services.

Dr. Popper has authored over 100 peer-reviewed publications and delivered more than 100 invited talks and keynote lectures internationally. His achievements have been recognized with the Gerhard Bruckmann Prize of the Austrian Statistical Society and the award “Austrian of the Year 2021” in Research. During the COVID-19 pandemic, he played a key role in national modelling efforts and served on multiple scientific advisory boards, contributing to evidence-based decision-making. In addition to his scientific work, he is active in science communication, with more than 1,000 media appearances, as well as collaborations at the interface of science and the arts. He is committed to teaching, supervision of early-career researchers, and building bridges between research, policy, and society.

Florian Aigner

Florian Aigner finished his PhD in quantum physics in 2010 and then switched from science to science communication. Today, he works as a journalist and science communicator: he writes books and news articles, and produces radio programmes and TV projects.