
What is Geometric Deep Learning?

Last updated: Sept. 24

Often, during events or conferences, when I explain our work I have to mention Geometric Deep Learning (GDL) as a cornerstone of our research. But this usually leads to questions like, "What is Geometric Deep Learning?" or comments like, "This is the first time I'm hearing this term!" Despite being an important and well-established field, GDL remains relatively obscure, particularly outside specialized areas like structural biology, computer vision, and physics. Understanding this concept is also crucial for grasping emerging technologies, such as de novo protein design. So I thought I'd shed some light on the subject and give a simple definition of GDL. I believe this will lay a good foundation for exploring topics like the application of generative AI in therapeutic solutions.


Let's begin with a simple question:


How can we build a model that learns and analyzes the structure of a protein, a highly complex 3D object in the Cartesian system?


We can use traditional deep learning algorithms, but then we run into two major challenges:


1. The lack of a global frame of reference. Traditional deep learning algorithms are sensitive to transformations of the input data (e.g., image rotation) and might perceive transformed data as a new, unseen sample. In the context of proteins, properties like biological function or binding affinity are unaffected by global translations and rotations of the protein structure, so the model's predictions should be too (see the sketch after this list).

2. The complexity of 3D structures, which leads to the curse of dimensionality.
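To make the first challenge concrete, here is a minimal NumPy sketch (my own illustrative example, not code from any particular paper): a toy model that reads raw coordinates changes its output when the structure is rotated, while a rotation-invariant descriptor like the pairwise distance matrix does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "protein": 5 atoms with 3D coordinates (a stand-in for a real structure).
coords = rng.normal(size=(5, 3))

# Random rotation matrix via QR decomposition (orthogonal, determinant +1).
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1
rotated = coords @ Q.T

# A naive model consuming raw coordinates (a hypothetical one-layer network).
W = rng.normal(size=(15, 1))
def naive_model(x):
    return np.tanh(x.reshape(-1) @ W)

# Raw coordinates change under rotation, so the naive output changes too.
print(np.allclose(naive_model(coords), naive_model(rotated)))  # False

# Pairwise distances are rotation-invariant: they do not change at all.
def pairwise_distances(x):
    diff = x[:, None, :] - x[None, :, :]
    return np.linalg.norm(diff, axis=-1)

print(np.allclose(pairwise_distances(coords), pairwise_distances(rotated)))  # True
```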


GDL addresses these challenges by using low-dimensional geometric priors and the associated symmetry groups, ensuring invariance to transformations like rotations and permutations while keeping the properties of protein structures intact.
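Permutation invariance can be demonstrated just as simply. Here is a tiny NumPy illustration with hypothetical per-residue feature vectors: summing over nodes gives a readout that does not depend on the arbitrary order in which residues are listed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-residue features for a 6-residue protein (6 nodes, 4 features).
node_features = rng.normal(size=(6, 4))

def invariant_readout(h):
    # Summing over nodes is permutation-invariant: reordering the rows of h
    # leaves the result unchanged, so the readout cannot depend on an
    # arbitrary residue ordering.
    return h.sum(axis=0)

perm = rng.permutation(6)
print(np.allclose(invariant_readout(node_features),
                  invariant_readout(node_features[perm])))  # True
```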


But GDL isn't just about 3D objects! It extends deep learning techniques to non-Euclidean domains like graphs and manifolds. For instance, proteins can be represented as graphs, where amino acids are nodes connected by edges. They can also be represented as meshes or point clouds. Out of all the different ways to represent and learn geometric data, graph representation learning is my favorite. It is built on an exciting family of architectures known as Graph Neural Networks (GNNs), which learn intricate, high-dimensional representations of graphs by propagating information (message passing) between nodes via edges. GNNs deserve their own post!


We can represent a protein structure, or any (bio-)molecule, as a graph. Nodes can be amino acids, and edges can represent covalent bonds or other molecular interactions. Nodes communicate with each other and update their feature vectors through a process known as message passing. This representation allows us to leverage graph learning architectures to learn the properties of the structure.
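As a rough sketch of what one round of message passing looks like in code, here is a generic mean-aggregation scheme in NumPy (an illustrative toy, not any particular published GNN architecture):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy protein graph: 4 residues (nodes), each with an 8-dimensional feature vector.
n_nodes, d = 4, 8
h = rng.normal(size=(n_nodes, d))

# Adjacency matrix: edges stand for bonds/interactions (symmetric, no self-loops).
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
], dtype=float)

# Weight matrices (randomly initialized here; learned during training in practice).
W_msg = rng.normal(size=(d, d))   # transforms a neighbor's features into a message
W_self = rng.normal(size=(d, d))  # transforms the node's own features

def message_passing_step(h, A):
    # 1. Each node computes a message (a linear transform of its features).
    messages = h @ W_msg
    # 2. Each node aggregates incoming messages from its neighbors; the mean is
    #    one permutation-invariant choice (sum and max are also common).
    degree = A.sum(axis=1, keepdims=True)
    aggregated = (A @ messages) / np.maximum(degree, 1.0)
    # 3. Each node updates its feature vector from its own state plus the aggregate.
    return np.tanh(h @ W_self + aggregated)

h_updated = message_passing_step(h, A)
print(h_updated.shape)  # (4, 8)
```

Stacking several such steps lets information travel further along the graph, so each residue's representation comes to reflect an increasingly large structural neighborhood.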


Here are some awesome visualizations that highlight the importance of equivariance in GDL. Equivariance is a core property: when the input of an equivariant layer is transformed, its output undergoes the same predictable transformation.
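We can check that definition numerically with a minimal NumPy sketch. Centering coordinates on their centroid is a simple rotation-equivariant map (an illustrative example, not a full equivariant network layer): rotating the input rotates the output in exactly the same way.

```python
import numpy as np

rng = np.random.default_rng(3)

coords = rng.normal(size=(7, 3))  # toy 3D structure: 7 points in space

def center(x):
    # Subtracting the centroid is rotation-equivariant:
    # center(x @ R.T) == center(x) @ R.T for any rotation R.
    return x - x.mean(axis=0)

# Random rotation matrix (orthogonal, determinant +1).
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1

# Equivariance check: transforming then mapping equals mapping then transforming.
print(np.allclose(center(coords @ Q.T), center(coords) @ Q.T))  # True
```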




In the world of protein structures, the local equivariance property means that GDL not only finds structural motifs regardless of their position and orientation, but also accounts for the relative positions and orientations of these motifs, which is necessary for correct information aggregation.


Also, check out this video from [3]:



Many researchers from different groups have contributed to this field. Thanks to their mathematically rigorous work, we now have a solid foundation for GDL, with numerous applications across different domains. Here are a few of these references:



In future posts, I'll go deeper into the details and share educational resources where you can learn more about GDL.
