Greetings everyone, I'm Lorenzo, a Ph.D. candidate in Artificial Intelligence at the University of Geneva. I work with Stéphane Marchand-Maillet
as a member of the VIPER group, which closely collaborates with the DMML group. My research focuses on developing self-supervised learning (SSL) and active learning (AL) strategies to streamline the costly process of acquiring labels for high-dimensional data tasks.
I specialize in Graph Neural Networks (GNNs): my work ranges from graph adversarial learning to representation learning, with a particular focus on anomaly detection in self- or weakly supervised settings.
I am also developing generative models (e.g., denoising diffusion probabilistic models, DDPMs) for 3D genomics, flow cytometry, scRNA-seq, spatial transcriptomics, and multi-omics data. Lately, I've been focusing on using LLMs to streamline routine hospital processes.
In collaboration with the Geneva University Hospitals (HUG), I'm currently working on detecting Minimal Residual Disease (MRD) in Acute Lymphoblastic and Acute Myeloid Leukemia
from flow cytometry data, where SSL and AL methods reduce the burden of physician-led annotation by improving data quality and labeling efficiency. In this setting, it is crucial to develop efficient training and inference models for single-cell classification.
I also serve as a teaching assistant for the “Introduction to Computational Finance”, “Natural Language Processing”, and “Information Retrieval” courses at the CUI.
Check out our latest work:
- The first comprehensive benchmark for multi-class single-cell classification on flow cytometry data, using GNNs and many other state-of-the-art single-cell deep learning techniques!
- A novel method to inject biological priors into different state-of-the-art GNNs for hierarchical single-cell classification: our plug-in module FCHC-GNN!
- Decoding attention for domain-dependent interpretable GNNs: the first study revealing the emergence of Massive Activations (MAs) within GNN attention mechanisms. By rigorously comparing activation distributions between untrained (base) and fully trained models across architectures such as GraphTransformer, GraphiT, and SAN (and on diverse benchmarks like ZINC, TOX21, and OGBN-PROTEINS), we reveal how MAs emerge, their direct link to learned biases, and their impact on model robustness. Furthermore, our work introduces a novel detection framework based on activation ratio distributions and proposes the Explicit Bias Term (EBT) as an effective countermeasure to turn MAs off (a toy illustration of the activation-ratio idea follows this list)!
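For readers curious about what an activation-ratio diagnostic can look like in practice, here is a minimal, hypothetical sketch in PyTorch. The statistic (max-to-median magnitude per layer), the 100x threshold, and all names are illustrative assumptions, not the definitions used in the paper; see the publication for the actual detection framework.

```python
import torch


def activation_ratio(acts: torch.Tensor) -> float:
    """Max-to-median magnitude ratio of a layer's activations
    (an illustrative statistic, not the paper's exact definition)."""
    mags = acts.abs().flatten().float()
    return (mags.max() / mags.median().clamp_min(1e-12)).item()


def flag_massive_activations(layer_acts: dict, threshold: float = 100.0) -> dict:
    """Flag layers whose activation ratio exceeds `threshold`
    (the 100x threshold is a placeholder, not a published value)."""
    return {name: activation_ratio(a) > threshold for name, a in layer_acts.items()}


# Toy usage with synthetic "attention" activations; layer_2 gets an
# artificially injected outlier to mimic a massive activation.
torch.manual_seed(0)
normal = torch.randn(32, 8, 64)
spiky = torch.randn(32, 8, 64)
spiky[0, 0, 0] = 5_000.0
print(flag_massive_activations({"layer_1.attn": normal, "layer_2.attn": spiky}))
# -> {'layer_1.attn': False, 'layer_2.attn': True}
```

In practice, one would collect per-layer attention activations with forward hooks and compare the resulting ratio distributions between untrained and trained checkpoints, in the spirit of the comparison described above.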
My Selected Papers
Publications List
See the publications list for the complete record.
Contact Me
Email: lorenzo.bini@unige.ch
Geneva-based: Department of Computer Science, BAT A, Route de Drize 7, 1227 Carouge, Switzerland, Office Room #221