Chalmers AI4Science Seminar

Advances in machine learning and AI systems are increasingly influencing how we approach the quantitative sciences, including physics, chemistry, and biology. These opportunities include having machines learn new representations of interactions between particles and of how matter transforms in reactions, help us decide which experiment to conduct next, or detect emerging phenomena. Sparse data remains a significant hurdle in many sciences, as do situations where common data assumptions do not hold. Consequently, it remains critical to ground our efforts in the millennia of scientific insight embodied in the literature to avoid, in the best case, having machines relearn what we already know. Chalmers AI4Science is a monthly seminar series where we invite early-career researchers to present their work at the interface of machine learning, artificial intelligence, and a scientific discipline. The series aims to provide an international platform at Chalmers for discussions of these topics and to strengthen interdisciplinary research involving machine learning and AI at Chalmers. The Chalmers AI4Science seminar is organized by Simon Olsson and Rocío Mercado.

Subscribe to our mailing list for reminders

Download iCalendar

Previous talks:

8 December, 2022 15:30 (local Swedish time)

Unexpected Lessons from Neural Networks Built with Symmetry for Physical Systems

Tess E. Smidt (Massachusetts Institute of Technology)


Atomic systems (molecules, crystals, proteins, etc.) are naturally represented by a set of coordinates in 3D space labeled by atom type. This is a challenging representation to use for machine learning because the coordinates are sensitive to 3D rotations, translations, and inversions (the symmetries of 3D Euclidean space). In this talk I’ll give an overview of Euclidean invariance and equivariance in machine learning for atomic systems. Then, I’ll share some recent applications of these methods on a variety of atomistic modeling tasks (ab initio molecular dynamics, prediction of crystal properties, and scaling of electron density predictions). Finally, I’ll explore open questions in expressivity, data-efficiency, and trainability of methods leveraging invariance and equivariance.
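The sensitivity of coordinates to rotations, and the invariance of quantities such as interatomic distances, can be illustrated with a small check (a toy NumPy sketch; the three-atom geometry and the rotation are invented for illustration and are not taken from the talk):

```python
import numpy as np

# Toy atomic positions (3 atoms in 3D) and a rotation about the z-axis.
pos = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

def pairwise_distances(x):
    """Interatomic distances: a classic rotation-invariant descriptor."""
    diff = x[:, None, :] - x[None, :, :]
    return np.linalg.norm(diff, axis=-1)

rotated = pos @ R.T  # the same structure, expressed in a rotated frame

# An invariant descriptor is unchanged by the rotation...
assert np.allclose(pairwise_distances(pos), pairwise_distances(rotated))
# ...while the raw coordinates are not.
assert not np.allclose(pos, rotated)
```

Equivariant networks go further: instead of discarding directional information, their internal features transform predictably under such rotations.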

Tess Smidt is an Assistant Professor of Electrical Engineering and Computer Science at MIT. Tess earned her SB in Physics from MIT in 2012 and her PhD in Physics from the University of California, Berkeley in 2018. Her research focuses on machine learning that incorporates physical and geometric constraints, with applications to materials design. Prior to joining the MIT EECS faculty, she was the 2018 Alvarez Postdoctoral Fellow in Computing Sciences at Lawrence Berkeley National Laboratory and a Software Engineering Intern on the Google Accelerated Sciences team where she developed Euclidean symmetry equivariant neural networks which naturally handle 3D geometry and geometric tensor data.

Video Recording (YouTube). Download Slides

11 May, 2023 15:30 (local Swedish time)

Exploiting symmetries in machine learning models

Soledad Villar (Johns Hopkins University)


Any representation of data involves arbitrary investigator choices. Because those choices are external to the data-generating process, each choice leads to an exact symmetry, corresponding to the group of transformations that takes one possible representation to another. These are the passive symmetries; they include coordinate freedom, gauge symmetry, and units covariance, all of which have led to important results in physics. Our goal is to understand the implications of passive symmetries for machine learning: Which passive symmetries play a role (e.g., permutation symmetry in graph neural networks)? What are the dos and don'ts of machine learning practice? We assay conditions under which passive symmetries can be implemented as group equivariances. We also discuss links to causal modeling, and argue that the implementation of passive symmetries is particularly valuable when the goal of the learning problem is to generalize out of sample.
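The permutation symmetry mentioned for graph neural networks is one concrete passive symmetry: node ordering is an arbitrary investigator choice, so graph-level outputs must not depend on it. A minimal sketch (illustrative NumPy code, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# 5 nodes with 3 features each; the row order encodes an arbitrary node labeling.
features = rng.normal(size=(5, 3))

def readout(h):
    """Sum pooling: a permutation-invariant graph-level readout."""
    return h.sum(axis=0)

perm = np.array([4, 0, 1, 2, 3])  # relabel the nodes

# Relabeling nodes leaves the pooled output unchanged...
assert np.allclose(readout(features), readout(features[perm]))
# ...whereas an order-dependent readout (flattening) is not invariant.
assert not np.allclose(features.flatten(), features[perm].flatten())
```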

Soledad Villar is an assistant professor of applied mathematics and statistics at Johns Hopkins University. Currently she is a visiting researcher at Apple Research in Paris. She was born and raised in Montevideo, Uruguay.

Video Recording (YouTube). Download Slides

13 October, 2022 14:00 (local Swedish time)

AI4Science at Microsoft Research

Rianne van den Berg (Microsoft Research)


In July 2022 Microsoft announced a new global team in Microsoft Research, spanning the UK, China, and the Netherlands, to focus on AI for science. In September 2022 we announced the opening of an additional lab in Berlin, Germany. In this talk I will first discuss the research areas that we are currently exploring in AI4Science at Microsoft Research in Cambridge (UK), Amsterdam, and our new lab in Berlin, covering topics such as drug discovery, material generation, neural PDE solvers, electronic structure theory, and computational catalysis. Then I will dive a little deeper into our recent work on Clifford neural layers for PDE modeling. The PDEs of many physical processes describe the evolution of scalar and vector fields. In order to take into account the correlation between these different fields and their internal components, we represent these fields as multivectors, which consist of scalar, vector, as well as higher-order components. Their algebraic properties, such as multiplication, addition, and other arithmetic operations, can be described by Clifford algebras, which we use to design Clifford convolutions and Clifford Fourier transforms. We empirically evaluate the benefit of Clifford neural layers by replacing convolution and Fourier operations in common neural PDE surrogates with their Clifford counterparts on two-dimensional Navier-Stokes and weather modeling tasks, as well as three-dimensional Maxwell equations. If time permits I will briefly cover very recent work on protein structure prediction and coarse-graining molecular dynamics.
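The multivector arithmetic underlying such layers can be made concrete in the smallest interesting case, the 2D Clifford algebra Cl(2,0) with basis (1, e1, e2, e12). The following is an illustrative sketch of the geometric product only, not the paper's convolution layers:

```python
def geometric_product(a, b):
    """Geometric product in Cl(2,0) with basis (1, e1, e2, e12),
    where e1^2 = e2^2 = 1 and e12 = e1 e2 (so e12^2 = -1)."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (
        a0*b0 + a1*b1 + a2*b2 - a3*b3,   # scalar part
        a0*b1 + a1*b0 - a2*b3 + a3*b2,   # e1 part
        a0*b2 + a2*b0 + a1*b3 - a3*b1,   # e2 part
        a0*b3 + a3*b0 + a1*b2 - a2*b1,   # e12 (bivector) part
    )

e1  = (0.0, 1.0, 0.0, 0.0)
e2  = (0.0, 0.0, 1.0, 0.0)
e12 = (0.0, 0.0, 0.0, 1.0)

assert geometric_product(e1, e2) == e12        # e1 e2 = e12
assert geometric_product(e12, e12)[0] == -1.0  # e12^2 = -1
```

A Clifford convolution then replaces the scalar multiply-accumulate of an ordinary convolution with this product, so scalar, vector, and bivector channels mix in an algebraically consistent way.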

Rianne is a Principal Researcher at Microsoft Research Amsterdam, where she works as part of the AI4Science team on the intersection of deep learning and computational chemistry. Her research has spanned a range of topics from generative modeling, variational inference, source compression, and graph-structured learning to condensed matter physics. Before joining MSR she was a Research Scientist at Google Brain. She received her PhD in theoretical condensed-matter physics in 2016 at the University of Amsterdam, where she also worked as a postdoctoral researcher as part of the Amsterdam Machine Learning Lab (AMLAB). In 2019 she won the Faculty of Science Lecturer of the Year award at the University of Amsterdam for teaching a machine learning course in the master of AI.

Video Recording (YouTube). Download Slides

9 February, 2023 15:30 (local Swedish time)

Can machine learning replace ADME experiments in drug discovery?

Raquel Rodriguez Perez (Novartis)


Absorption, distribution, metabolism, and excretion (ADME) properties play an important role in the success of drug candidates. Unfavorable pharmacokinetics (PK) can prevent compounds from progressing in drug development, and early screening of ADME/PK properties aims to reduce the number of molecules that fail during development. This talk will focus on how machine learning can leverage historical ADME/PK data to make predictions for new compounds. Machine learning models developed for PK property prediction will be presented, as well as some of their applications at NIBR. Such models are applicable to large libraries, virtual compounds, and generative chemistry workflows. Hence, predictions enable early informed decisions and compound prioritization, aiming to reduce late-stage attrition. However, using machine learning-based predictions to support decision-making in a drug discovery project involves important considerations. Current challenges and future directions for improving the use of ADMET models in industry will be discussed.

Dr. Raquel Rodríguez-Pérez is a Principal Scientist at Novartis Institutes for Biomedical Research and works in the Modeling & Simulation Data Science team in the Translational Medicine Department. She develops machine learning models to predict compound properties relevant to pharmacokinetics, and supports drug discovery teams with modeling and data science tools to make better and faster decisions in lead optimization. Prior to working at Novartis, Raquel obtained her B.Sc. and M.Sc. degrees in Biomedical Engineering from the University of Barcelona and her PhD in Computational Life Sciences from the University of Bonn. She worked on data analysis for bioinformatics applications at the Institute for BioEngineering of Catalonia (IBEC) and wrote her thesis on machine learning models for interpretable compound activity predictions. She therefore has experience applying machine learning and deep learning methods to a range of life-science problems. She was a Marie Curie fellow and worked in the Computational Chemistry - Data Science group at Boehringer Ingelheim, Germany. She has mentored scientists at different career levels in both academia and industry. Overall, her research interests include bio/cheminformatics, machine learning, and data science for biomedical applications.

Video Recording (YouTube). Download Slides

13 April, 2023 15:30 (local Swedish time)

End-to-end learning and auto-differentiation: forces, uncertainties, observables, trajectories and scales

Rafael Gomez Bombarelli (Massachusetts Institute of Technology)


Deep learning and, more generally, differentiable programming allow many scientific problems to be expressed as end-to-end learning tasks while retaining some inductive bias. Common themes in scientific machine learning include learning surrogate functions for expensive simulators, directly sampling complex distributions, and efficiently time-propagating known or unknown differential-equation systems.

In this talk, we will discuss our recent work applying deep learning surrogates and auto-differentiation to molecular simulations. In particular, we will explore active learning of machine-learning potentials with differentiable uncertainty, and the use of deep generative neural networks to learn reversible coarse-grained representations of atomic systems. Lastly, we will describe the application of differentiable simulations to learning interaction potentials from experimental data and to reaction path finding without prior knowledge of collective variables.

Rafael Gomez-Bombarelli (Rafa) is the Jeffrey Cheah Career Development Professor in MIT's Department of Materials Science and Engineering. His work aims to fuse machine learning and atomistic simulations for designing materials and their transformations. By embedding domain expertise and experimental results into their models, alongside physics-based knowledge, the Learning Matter Lab designs materials that can be realized in the lab and scaled to practical applications. Together with experimental collaborators, they develop new practical materials such as heterogeneous thermal catalysts (zeolites), transition metal oxide electrocatalysts, therapeutic peptides, organic electronics for displays, and electrolytes for batteries.

Rafa received BS, MS, and PhD (2011) degrees in chemistry from the Universidad de Salamanca (Spain), followed by postdoctoral work at Heriot-Watt University (UK) and Harvard University, and a stint in industry at Kyulux North America. He was awarded the Camille and Henry Dreyfus Foundation "Machine Learning in the Chemical Sciences and Engineering" award in 2021 and a Google Faculty Research Award in 2019. He co-founded Calculario, a Harvard spin-out company; was Chief Learning Officer of ZebiAI, a drug discovery startup acquired by Relay Therapeutics in 2022; and serves as a consultant and scientific advisor to multiple startups.

Video Recording (YouTube). Download Slides

10 November, 2022 15:00 (local Swedish time)

Artificial Chemical Intelligence: Integrated Theory, Simulations and AI for Enabling Molecular Discovery

Pratyush Tiwary (University of Maryland, College Park)


The universality of thermodynamics and statistical mechanics has led to a language comprehensible to chemists, physicists, materials scientists, geologists, and others, enabling countless scientific discoveries in diverse fields. In the last decade, a new, arguably common language that everyone seems to speak but no one quite fully understands has emerged with the advent of artificial intelligence (AI). It is natural to ask if AI can be integrated with the various theoretical and simulation methods rooted in thermodynamics and statistical mechanics for discoveries that none of these could achieve individually. It is also natural to ask if chemists, who are not fundamentally trained in AI, should trust any of the results obtained using AI or, even worse, theory or computer simulations that were guided by AI. In this seminar I will show how such an integration of disciplines can be attained, creating trustworthy, robust AI frameworks for use by chemists and physical scientists. I will talk about such methods developed by my group using and extending different flavors of AI [1-3]. I will demonstrate the methods on different problems involving protein kinases, riboswitches, and amino acid nucleation [4-5], where we predict mechanisms at timescales much longer than milliseconds while keeping all-atom/femtosecond resolution, including the problem of recovering a Boltzmann-weighted ensemble of conformations from AlphaFold2 [6]. I will conclude with an outlook on future challenges and opportunities, envisioning a new sub-discipline of “Artificial Chemical Intelligence” where chemistry (both theory and simulations) moves hand-in-hand with AI to enable smart molecular discovery.
[1] Wang, Ribeiro, Tiwary. Nature Comm 10, 3573 (2019).
[2] Tsai, Kuo, Tiwary. Nature Comm 11, 5115 (2020).
[3] Wang, Herron, Tiwary. Proc Natl Acad Sci 119, e2203656119 (2022).
[4] Wang, Parmar, Schneekloth, Tiwary. ACS Central Science 8, 741 (2022).
[5] Shekhar, Smith, Seeliger, Tiwary. Angewandte Chemie 61, e202200983 (2022).
[6] Vani, Aranganathan, Tiwary. bioRxiv

Pratyush Tiwary is an Associate Professor at the University of Maryland, College Park in the Department of Chemistry and Biochemistry and the Institute for Physical Science and Technology. He received his undergraduate degree in Metallurgical Engineering from IIT-BHU and his PhD in Materials Science from Caltech, followed by postdoctoral work at ETH Zurich and Columbia University. His work at the interface of molecular simulations, statistical mechanics, and machine learning has been recognized through many awards, including a Sloan Research Fellowship in Chemistry, an NSF CAREER award, an NIH Maximizing Investigators’ Research Award, and the ACS OpenEye Outstanding Junior Faculty Award.

Video Recording (YouTube). Download Slides

8 September, 2022 15:30 (local Swedish time)

Supervised and physics-informed learning in function spaces

Paris Perdikaris (University of Pennsylvania)


While the great success of modern deep learning lies in its ability to approximate maps between finite-dimensional vector spaces, many tasks in science and engineering involve continuous measurements that are functional in nature. For example, in climate modeling one might wish to predict the pressure field over the Earth from measurements of the surface air temperature field. The goal is then to learn an operator mapping the space of temperature functions to the space of pressure functions. In recent years, operator learning techniques have emerged as a powerful tool for supervised learning in infinite-dimensional function spaces. In this talk we will provide an introduction to this topic, present a general approximation framework for operators, and demonstrate how one can construct deep learning models that can handle functional data. We will see how such tools can help us build neural ODE and PDE solvers that can be trained even in the absence of labeled data, and enable the fast prediction of continuous spatio-temporal fields, up to three orders of magnitude faster than conventional numerical solvers. We will also discuss key open questions related to generalization, data-efficiency, and inductive bias, the resolution of which is critical for the success of AI in science and engineering.
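The operator-learning viewpoint can be illustrated in finite dimensions, where a discretized linear operator is simply a matrix fit to pairs of input and output functions. In the toy sketch below, differentiation of sine waves stands in for the temperature-to-pressure map of the abstract (the data and setup are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64)

# Training data: random sine functions and their derivatives, sampled on a grid.
# Each "function" is a vector of point values -- a discretized element of function space.
freqs = rng.integers(1, 5, size=200)
phases = rng.uniform(0, 2 * np.pi, size=200)
F  = np.array([np.sin(k * x + p)     for k, p in zip(freqs, phases)])  # inputs f
dF = np.array([k * np.cos(k * x + p) for k, p in zip(freqs, phases)])  # targets f'

# Fit a linear operator (a matrix) mapping f -> f' by least squares.
W, *_ = np.linalg.lstsq(F, dF, rcond=None)

# The learned operator generalizes to an unseen function of the same class.
f_new = np.sin(3 * x + 0.5)
pred = f_new @ W
true = 3 * np.cos(3 * x + 0.5)
assert np.max(np.abs(pred - true)) < 1e-5
```

Operator-learning architectures such as DeepONets and neural operators generalize this idea to nonlinear maps and to evaluation at arbitrary resolutions.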

Paris Perdikaris is an Assistant Professor in the Department of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania. He received his PhD in Applied Mathematics at Brown University in 2015, and, prior to joining Penn in 2018, he was a postdoctoral researcher at the department of Mechanical Engineering at the Massachusetts Institute of Technology. His current research interests include physics-informed machine learning, uncertainty quantification, and engineering design optimization. His work and service has received several distinctions including the DOE Early Career Award (2018), the AFOSR Young Investigator Award (2019), the Ford Motor Company Award for Faculty Advising (2020), the SIAG/CSE Early Career Prize (2021), and the Scialog Fellowship (2021).

Video Recording (YouTube). Download Slides

9 March, 2023 15:30 (local Swedish time)

OpenFold: Lessons learned and insights gained from rebuilding and retraining AlphaFold2

Mohammed AlQuraishi (Columbia University)


AlphaFold2 revolutionized structural biology by accurately predicting protein structures from sequence. Its implementation, however, (i) lacks the code and data required to train models for new tasks, such as predicting alternate protein conformations or antibody structures, (ii) is unoptimized for commercially available computing hardware, making large-scale prediction campaigns impractical, and (iii) remains poorly understood with respect to how training data and regimen influence accuracy. Here we report OpenFold, an optimized and trainable version of AlphaFold2. We train OpenFold from scratch and demonstrate that it fully reproduces AlphaFold2’s accuracy. By analyzing OpenFold training, we find new relationships between data size/diversity and prediction accuracy and gain insights into how OpenFold learns to fold proteins during its training process.

Mohammed AlQuraishi is an Assistant Professor in the Department of Systems Biology and a member of Columbia's Program for Mathematical Genomics, where he works at the intersection of machine learning, biophysics, and systems biology. The AlQuraishi Lab focuses on two biological perspectives: the molecular and systems levels. On the molecular side, the lab develops machine learning models for predicting protein structure and function, protein-ligand interactions, and learned representations of proteins and proteomes. On the systems side, the lab applies these models in a proteome-wide fashion to investigate the organization, combinatorial logic, and computational paradigms of signal transduction networks, how these networks vary in human populations, and how they are dysregulated in human diseases, particularly cancer.

Dr. AlQuraishi holds undergraduate degrees in biology, computer science, and mathematics. He earned an MS in statistics and a PhD in genetics from Stanford University. He subsequently joined the Systems Biology Department at Harvard Medical School as a Departmental Fellow and a Fellow in Systems Pharmacology, where he developed the first end-to-end differentiable model for learning protein structure from data. Prior to starting his academic career, Dr. AlQuraishi spent three years founding two startups in the mobile computing space. He joined the Columbia Faculty in 2020.

Video Recording (YouTube). Download Slides

10 March, 2022 15:30 (local Swedish time)

Multimodal Machine Learning for Protein Engineering

Kevin Yang (Microsoft Research)


Engineered proteins play increasingly essential roles in industries and applications spanning pharmaceuticals, agriculture, specialty chemicals, and fuel. Machine learning could enable an unprecedented level of control in protein engineering for therapeutic and industrial applications. Large self-supervised models pretrained on millions of protein sequences have recently gained popularity in generating embeddings of protein sequences for protein property prediction. However, protein datasets contain information in addition to sequence that can improve model performance. This talk will cover pretrained models that use both sequence and structural data, their application to predict which portions of proteins can be removed while retaining function, and a new set of protein fitness benchmarks to measure progress in pretrained models of proteins.

Kevin Yang is a senior researcher at Microsoft Research in Cambridge, MA who works on problems at the intersection of machine learning and biology. He did his PhD at Caltech with Frances Arnold on applying machine learning to protein engineering. Before joining MSR, he was a machine learning scientist at Generate Biomedicines, where he used machine learning to optimize proteins. Before graduate school, Kevin taught math and physics for three years at a high school in Inglewood, California through Teach for America.

Video Recording (YouTube). Download Slides

9 June, 2022 14:00 (local Swedish time)

De novo drug design with chemical language models

Francesca Grisoni (Eindhoven University of Technology)


Artificial intelligence (AI) is fueling computer-aided drug discovery. Chemical language models (CLMs) constitute a recent addition to the medicinal chemist’s toolkit for AI-driven drug design. CLMs can be used to generate novel molecules in the form of strings (e.g., SMILES, SELFIES) without relying on human-engineered molecular assembly rules. By taking inspiration from natural language processing, CLMs have been shown to learn “syntax” rules for molecule generation and to implicitly capture “semantic” molecular features, such as physicochemical properties, bioactivity, and chemical synthesizability. This talk will illustrate some successful applications of CLMs to design novel bioactive compounds from scratch in the context of drug discovery, at the interface between theory and wet-lab experiments. Moreover, the talk will provide a personal perspective on current limitations and future opportunities for AI in medicinal and organic chemistry, to accelerate molecule discovery and chemical space exploration.
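A caricature of the idea: even a character-level bigram model picks up some "syntax" statistics from SMILES strings. The sketch below is deliberately minimal and bears no resemblance to the RNN- and transformer-based CLMs discussed in the talk (corpus and code are illustrative):

```python
from collections import defaultdict
import random

# A tiny SMILES "corpus": ethanol, acetic acid, benzene.
smiles = ["CCO", "CC(=O)O", "c1ccccc1"]

# Count which character follows which, with "^"/"$" as start/end markers.
counts = defaultdict(lambda: defaultdict(int))
for s in smiles:
    tokens = ["^"] + list(s) + ["$"]
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1

def sample(rng, max_len=20):
    """Generate a string by walking the bigram transition table."""
    tok, out = "^", []
    while len(out) < max_len:
        successors = list(counts[tok].keys())
        weights = list(counts[tok].values())
        nxt = rng.choices(successors, weights=weights)[0]
        if nxt == "$":
            break
        out.append(nxt)
        tok = nxt
    return "".join(out)

rng = random.Random(0)
generated = [sample(rng) for _ in range(5)]
# Generated strings reuse only characters seen in the training corpus.
assert all(ch in "CO()=c1" for g in generated for ch in g)
```

Real CLMs replace the bigram table with a deep sequence model and are trained on millions of molecules, which is what lets them capture the "semantic" properties the abstract mentions.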

Francesca Grisoni is a tenure-track Assistant Professor at the Eindhoven University of Technology, where she leads the Molecular Machine Learning team. After receiving her PhD in 2016 at the University of Milano-Bicocca, with a dissertation on machine learning for (eco)toxicology, Francesca worked as a data scientist and as a biostatistical consultant for the pharmaceutical industry. Later, she rejoined the University of Milano-Bicocca (in 2017) and then ETH Zurich (in 2019) as a postdoctoral researcher, working on machine learning for drug discovery and molecular property prediction. Her current research focuses on developing novel chemistry-centered AI methods to augment human intelligence in drug discovery, at the interface between computation and wet-lab experiments.

Video Recording (YouTube). Download Slides

12 May, 2022 14:00 (local Swedish time)

AI for Quantum Experiments

Evert van Nieuwenburg (NBI, University of Copenhagen)


In this talk I aim to showcase how machine-learning-inspired optimisations can help with current state-of-the-art experiments. In particular, I will first consider the readout of semiconductor spin qubits using simple principal component analysis. I will then highlight a specifically fabricated semiconductor device with a 3x3 ‘pixel array’, and discuss the simultaneous tuning of those 9 gate voltages to construct a quantum point contact. Finally, I will move on to larger arrays of quantum dots and the detection of transitions between charge states (i.e., finding the facets of high-dimensional Coulomb diamonds).
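The PCA-based readout idea can be sketched on synthetic data: two qubit states produce traces that differ along one direction, and the first principal component recovers that direction (the data model below is invented for illustration and is unrelated to the actual experiments):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic readout traces: two "qubit states" produce signals differing along
# one direction in a 50-dimensional trace, buried in noise.
n, d = 200, 50
labels = rng.integers(0, 2, size=n)
signal = np.outer(labels - 0.5, np.linspace(1, 2, d))  # state-dependent component
data = signal + 0.05 * rng.normal(size=(n, d))

# PCA via SVD of the mean-centered data matrix.
centered = data - data.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt[0]   # projection onto the first principal component

# The first PC separates the two states: thresholding the score classifies them
# (the PC's sign is arbitrary, hence the max over the two labelings).
pred = (scores > 0).astype(int)
acc = max(np.mean(pred == labels), np.mean((1 - pred) == labels))
assert acc > 0.95
```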

Evert is a theoretical condensed matter physicist with a background in open systems, numerical simulations, and many-body effects. He now also actively investigates how condensed matter physics and machine learning can help each other.

Video Recording (YouTube). Download Slides

12 January, 2023 15:30 (local Swedish time)

Ab initio thermodynamics

Bingqing Cheng (Institute of Science and Technology Austria)


Prof. Bingqing Cheng moved to the Institute of Science and Technology (IST) Austria as a tenure-track Assistant Professor in September 2021. Before that, she was a Departmental Early Career Fellow in the Computer Laboratory, University of Cambridge (11/2020–08/2021), and a Junior Research Fellow at Trinity College (03/2019-). She earned her PhD (09/2014–02/2019) in Materials Science at École Polytechnique Fédérale de Lausanne (EPFL), supervised by Michele Ceriotti, a Master’s degree from the University of Hong Kong, and a joint Bachelor’s degree from the University of Hong Kong and Shanghai Jiao Tong University.

Video Recording (YouTube). Download Slides

14 April, 2022 15:00 (local Swedish time)

Data-driven discovery of coordinates and governing equations

Bethany A Lusch (Argonne National Lab)


Governing equations are essential to the study of physical systems, providing models that can generalize to predict previously unseen behaviors. There are many systems of interest across disciplines where large quantities of data have been collected, but the underlying governing equations remain unknown. This work introduces an approach to discover governing models from data. The proposed method addresses a key limitation of prior approaches by simultaneously discovering coordinates that admit a parsimonious dynamical model. Developing parsimonious and interpretable governing models has the potential to transform our understanding of complex systems, including in neuroscience, biology, and climate science.
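The "discover governing equations from data" step can be sketched with sparse regression over a library of candidate terms, in the spirit of SINDy (a toy one-dimensional example; the talk's contribution of simultaneously discovering coordinates is not shown):

```python
import numpy as np

# Trajectory data from the (hidden) system dx/dt = -2x, i.e. x(t) = exp(-2t).
t = np.linspace(0, 2, 400)
x = np.exp(-2 * t)
dxdt = np.gradient(x, t)  # numerical derivative estimated from the data

# Candidate library of terms the governing equation might contain.
library = np.column_stack([np.ones_like(x), x, x**2])  # [1, x, x^2]

# Sparse regression: least squares, then zero out small coefficients.
coefs, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
coefs[np.abs(coefs) < 0.1] = 0.0

# The discovered model is dx/dt = c1 * x with c1 close to -2:
assert coefs[0] == 0.0 and coefs[2] == 0.0
assert abs(coefs[1] + 2) < 0.05
```

The key difficulty the abstract points to is that real systems are rarely measured in coordinates where such a parsimonious model exists, which is why coordinates and equations must be learned together.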

Dr. Bethany Lusch is an Assistant Computer Scientist in the data science group at the Argonne Leadership Computing Facility at Argonne National Lab. Her research expertise includes developing methods and tools to integrate AI with science, especially for dynamical systems and PDE-based simulations. Her recent work includes developing machine-learning emulators to replace expensive parts of simulations, such as computational fluid dynamics simulations of engines and climate simulations. She is also working on methods that incorporate domain knowledge in machine learning, representation learning, and using machine learning to analyze supercomputer logs. She holds a PhD and MS in applied mathematics from the University of Washington and a BS in mathematics from the University of Notre Dame.

Video Recording (YouTube). Download Slides

8 June, 2023 14:30 (local Swedish time)

Causal Experimental Design

Stefan Bauer (Helmholtz Munich / TU Munich / CIFAR)


Deep neural networks have achieved outstanding success in many tasks, ranging from computer vision to natural language processing and robotics. However, such models still pale in their ability to understand the world around us and to generalize and adapt to new tasks or environments. One possible solution to this problem is causal models, since they can reason about the connections between causal variables and the effect of intervening on them. This talk will introduce the fundamental concepts of causal inference, its connections and synergies with deep learning, as well as practical applications and advances in sustainability and AI for science.

Stefan Bauer is a professor at TU Munich, group leader at Helmholtz Institute Munich, and a CIFAR Azrieli Global Scholar. Using and developing tools of causality, deep learning, and real robotic systems, his research focuses on the longstanding goal of artificial intelligence to design machines that can extrapolate experience across environments and tasks. He obtained his PhD in Computer Science from ETH Zurich and was awarded the ETH medal for an outstanding doctoral thesis. Before that, he graduated with a BSc and MSc in Mathematics from ETH Zurich and a BSc in Economics and Finance from the University of London. During his studies, he held scholarships from the Swiss and German National Merit Foundations. In 2019, he won the best paper award at the International Conference on Machine Learning (ICML), and in 2020 he was the lead organizer of a robotics challenge in the cloud.

Video Recording (YouTube). Download Slides

10 February, 2022 13:30 (local Swedish time)

Zoom and Enhance: Towards Multi-Scale Representations in the Life Sciences

Bastian Rieck (Helmholtz Pioneer Campus and Technical University of Munich)


With novel measurement technologies easily resulting in a deluge of data, we need to consider multiple perspectives in order to ‘see the forest for the trees.’ A single perspective or scale is often insufficient to faithfully capture the underlying patterns of complex phenomena, in particular in the life sciences. However, moving from an ‘either–or’ selection of relevant scales to a ‘both–and’ utilisation of all scales promises better insights and improved expressivity. The emerging field of topological machine learning provides us with effective tools for building multi-scale representations of complex data. This talk presents two use cases that demonstrate the power of learning such representations. The first use case involves improving antimicrobial resistance prediction—a critical problem in a world suffering from superbugs—while the second use case permits us a glimpse into how cognition changes from early childhood to adolescence.

Bastian is Principal Investigator of the AIDOS Lab at the Institute of AI for Health and the Helmholtz Pioneer Campus, focusing on machine learning methods in biomedicine. Dr. Rieck is also a TUM Junior Fellow and a member of ELLIS. He was previously a senior assistant in the Machine Learning & Computational Biology Lab of Prof. Dr. Karsten Borgwardt at ETH Zürich and received his Ph.D. in computer science from Heidelberg University.

Video Recording (YouTube). Download Slides