Bahare Fatemi
Bahare Fatemi is a Research Scientist at Google Research in Montreal, specializing in graph representation learning and natural language processing. She received her Ph.D. from the University of British Columbia. Her work has been featured in top AI conferences and journals, including NeurIPS, ICLR, AAAI, and JMLR. She co-organized the Mining and Learning with Graphs workshop at KDD, the Women in Machine Learning (WiML) workshop, and the Montreal AI Symposium. She currently serves on the WiML board.
Authored Publications
Graphs are a powerful tool for representing and analyzing complex relationships in real-world applications such as social networks, recommender systems, and computational finance. Reasoning on graphs is essential for drawing inferences about the relationships between entities in a complex system, and for identifying hidden patterns and trends. Despite the remarkable progress in automated reasoning with natural text, reasoning on graphs with large language models (LLMs) remains an understudied problem. In this work, we perform the first comprehensive study of encoding graph-structured data as text for consumption by LLMs. We show that LLM performance on graph reasoning tasks varies on three fundamental levels: (1) the graph encoding method, (2) the nature of the graph task itself, and (3) interestingly, the very structure of the graph considered. These novel results provide valuable insight into strategies for encoding graphs as text. Using these insights, we illustrate how the correct choice of encoders can boost performance on graph reasoning tasks inside LLMs by 4.8% to 61.8%, depending on the task.
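To make the idea concrete, here is a minimal sketch of one way to verbalize a graph for an LLM prompt. The encoder and the example question below are illustrative assumptions, not the specific encoding functions studied in the paper.

# Sketch: verbalize an undirected graph as text and append a reasoning
# question. Function names and phrasing are illustrative assumptions.
def encode_graph_as_text(nodes, edges):
    lines = ["G describes a graph among nodes " + ", ".join(map(str, nodes)) + "."]
    for u, v in edges:
        lines.append(f"Node {u} is connected to node {v}.")
    return "\n".join(lines)

def make_prompt(nodes, edges, question):
    return encode_graph_as_text(nodes, edges) + "\nQ: " + question

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(make_prompt(nodes, edges, "What is the degree of node 1?"))

Different choices at each step (how nodes are named, how edges are phrased, how the question is worded) correspond to different graph encoders, which is exactly the design space whose impact the abstract quantifies.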
TpuGraphs: Performance Prediction Datasets on Large Tensor Computational Graphs
Mangpo Phothilimthana
Kaidi Cao
Charith Mendis
Advances in Neural Information Processing Systems (2023)
Precise hardware performance models play a crucial role in code optimizations. They can assist compilers in making heuristic decisions or aid autotuners in identifying the optimal configuration for a given program. For example, the autotuner for XLA, a machine learning compiler, discovered 10–20% speedups on state-of-the-art models serving substantial production traffic at Google. Although a few datasets for program performance prediction exist, they target small sub-programs such as basic blocks or kernels. This paper introduces TpuGraphs, a performance prediction dataset of full tensor programs, represented as computational graphs, running on Tensor Processing Units (TPUs). Each graph in the dataset represents the main computation of a machine learning workload, e.g., a training epoch or an inference step. Each data sample contains a computational graph, a compilation configuration, and the execution time of the graph when compiled with that configuration. The graphs in the dataset are collected from open-source machine learning programs, featuring popular model architectures (e.g., ResNet, EfficientNet, Mask R-CNN, and Transformer). TpuGraphs provides 25x more graphs than the largest graph property prediction dataset (with comparable graph sizes), and its graphs are on average 770x larger than those in existing performance prediction datasets on machine learning programs. This graph-level prediction task on large graphs introduces new challenges in learning, ranging from scalability and training efficiency to model quality.
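As a rough illustration of the sample structure described above (field names here are assumptions for exposition, not the dataset's published schema), each TpuGraphs-style example pairs a computational graph and a compilation configuration with a measured runtime, making the learning problem a graph-level prediction task:

# Illustrative shape of one (graph, config, runtime) sample; field
# names are assumptions, not the dataset's published schema.
from dataclasses import dataclass
import numpy as np

@dataclass
class TensorProgramSample:
    node_features: np.ndarray    # (num_nodes, feat_dim) per-op features
    edge_index: np.ndarray       # (2, num_edges) data-flow edges
    config_features: np.ndarray  # (config_dim,) compilation configuration
    runtime: float               # execution time under this configuration

sample = TensorProgramSample(
    node_features=np.zeros((100, 16)),
    edge_index=np.zeros((2, 250), dtype=np.int64),
    config_features=np.zeros(24),
    runtime=0.0031,  # seconds; the prediction target
)

Predicting runtime from a (graph, configuration) pair would let an autotuner rank candidate configurations without compiling and running each one.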
With modern fluorescent probes and light microscopes, the possibilities for monitoring the dynamics of cells, organelles, molecular complexes, and even single molecules with high spatiotemporal resolution are greater than ever [1]. Characterizing the motion of cellular and molecular entities can reveal a great deal of information about their functions, interactions, and overall activity landscape [2]. Motion characterization is generally the end product of an image analysis pipeline that starts with identifying and localizing the objects in each image (segmentation/detection), tracking them over time, and then analyzing the resulting trajectories to characterize the objects’ motion. Whether objects are large (e.g. cells or organelles) or small (e.g. molecules or molecular complexes), as long as they can be represented by coordinates (e.g. cell centroid position or molecular position), following them over time in a series of images is effectively a (multiple) particle tracking problem. In their recent publication in Nature Machine Intelligence, Pineda et al. [3] describe a powerful deep learning approach based on graph neural networks (GNNs) for particle tracking or, if desired, for the characterization of object motion without explicit tracking.
Particle tracking is often the most challenging step in the analysis pipeline from images to motion characterization. Whenever the analysis involves a relatively high density of heterogeneously moving objects, there is ambiguity in determining which object has gone where throughout the image series [4]. In addition, objects may merge with each other, due to crossing paths or interactions, and may undergo splitting, such as during cell division. Despite these challenges, a high density of tracked objects is often desired because of the rich information it yields about the system studied [5]. The novel GNN-based approach by Pineda et al. [3], named MAGIK, offers solutions to the tracking problem in two ways: First, MAGIK can be employed to construct the trajectories of the imaged objects from the graph of their connections in space and time. Second, MAGIK can be employed to characterize the motion of the imaged objects directly from their graph of spatiotemporal connections, without explicit tracking.
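A hedged sketch of the kind of spatiotemporal graph such an approach operates on (thresholds and function names are illustrative assumptions, not the authors' implementation): each detection becomes a node, and candidate edges link detections that are close in both time and space.

# Sketch: build candidate edges between detections that are close in
# space and time. Thresholds are illustrative assumptions.
import math

def build_spatiotemporal_graph(detections, max_dt=3, max_dist=20.0):
    # detections: list of (frame, x, y); returns index pairs (i, j)
    # with frame_i < frame_j that are candidate links.
    edges = []
    for i, (t1, x1, y1) in enumerate(detections):
        for j, (t2, x2, y2) in enumerate(detections):
            if 0 < t2 - t1 <= max_dt and math.hypot(x2 - x1, y2 - y1) <= max_dist:
                edges.append((i, j))
    return edges

detections = [(0, 5.0, 5.0), (1, 6.0, 5.5), (1, 50.0, 50.0), (2, 7.5, 6.0)]
print(build_spatiotemporal_graph(detections))  # [(0, 1), (0, 3), (1, 3)]

Deciding which of these candidate edges are true links is then precisely the edge classification framing described below.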
Graphs are ubiquitously used in science to represent complex systems of interacting objects, from molecules to social and transportation networks [6]. GNNs provide a framework for incorporating existing information about the objects, with an inductive bias based on a larger structure relating them, to make predictions about these objects or the system as a whole. In MAGIK [3], spatiotemporal connections between imaged objects, encoded in the structure of the graph, provide this inductive bias, with the premise that objects close in space and time are likely to be the same. MAGIK utilizes this graph representation in a powerful way by employing GNNs [7] to perform various tracking and motion characterization tasks. The GNN model proposed in MAGIK considers both spatial and temporal information in a static graph. This model is enhanced by an adaptive and interpretable attention mechanism. Attention estimates the strength of association among the objects and provides insight into the dynamics of the system for the task at hand.
GNNs enable MAGIK to provide a versatile platform for performing multiple tasks, from linking coordinates into trajectories to inferring local and global dynamic properties. MAGIK is tested for its flexibility and reliability in real and simulated scenarios corresponding to a variety of biological experiments. The results of the tests show that MAGIK is able to identify which spatiotemporal connections in a graph influence the dynamic properties of each object. They further show that MAGIK accurately constructs trajectories, obtaining outstanding results for cell tracking, including the identification of cell division events, across multiple microscopy techniques and cell types. As in most applications the final goal of tracking is to characterize the dynamics of the system, Pineda et al. [3] have also tested MAGIK for quantifying motion parameters without explicit tracking, showing that it can accurately and sensitively quantify local and global motion properties of the imaged objects. Technically, MAGIK performs these various tasks by tailoring its training to the task: tracking as a graph edge classification task, local motion characterization as a graph node regression task, and global motion characterization as a graph-level regression or classification task.
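A minimal sketch of how one GNN backbone can feed these three task framings (layer sizes, names, and the mean-pool readout are assumptions for illustration, not MAGIK's exact architecture):

# Sketch: three readout heads over shared node embeddings. Sizes and
# the mean-pool readout are illustrative assumptions.
import torch
import torch.nn as nn

class TaskHeads(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.edge_classifier = nn.Linear(2 * dim, 1)  # tracking: real link or not
        self.node_regressor = nn.Linear(dim, 1)       # local motion property per object
        self.graph_regressor = nn.Linear(dim, 1)      # global motion property per graph

    def forward(self, node_emb, edge_index):
        src, dst = edge_index                         # (2, num_edges)
        edge_logits = self.edge_classifier(
            torch.cat([node_emb[src], node_emb[dst]], dim=-1))
        node_preds = self.node_regressor(node_emb)
        graph_pred = self.graph_regressor(node_emb.mean(dim=0))
        return edge_logits, node_preds, graph_pred

node_emb = torch.randn(4, 64)  # embeddings produced by a GNN backbone
edge_index = torch.tensor([[0, 0, 1], [1, 3, 3]])
edge_logits, node_preds, graph_pred = TaskHeads()(node_emb, edge_index)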
As demonstrated by MAGIK, GNNs offer powerful tools for the analysis of spatiotemporal connections between objects in biological images. New developments in the fields of graphs and GNNs will further advance this goal. One possibility is to replace MAGIK's fixed, fully connected graph with a learnable sparse graph [8]. Another possibility is to use hypergraphs, which go beyond binary connections (a fundamental limitation of graphs); this would be a promising approach for characterizing the spatiotemporal connections of systems with complex interactions [9]. Furthermore, as the problem studied here is temporal in nature, it may benefit from temporal GNNs [10], which directly incorporate time into the GNN formulation. All in all, the powerful combination of cutting-edge microscopes, fluorescent probes, and geometric deep learning tools will aid the study of the organization, dynamics, and interactions of diverse systems, from molecules in a cell to cells in a tissue, and beyond.