Deep Learning Diagrams

Despite being mathematical systems, deep learning models lack a systematic means of analysis. Proper analysis requires several abstractions. Mathematically, we are interested in how functions compose, how they are parallelized, and how simple linear operations can be rearranged. Practically, we are interested in the resource costs of a given mathematical goal, which let us find the optimal execution strategy on parallelized GPU hardware. Typical approaches struggle to combine these different lenses.

Category theory's tools for studying abstractions allow us to relate these approaches. Furthermore, category theory allows us to develop a rigorous diagrammatic language which reflects these abstractions. Our methods have successfully expressed a variety of models in full detail, exactly describing the constituent functions and their parallelization and linearity properties. Additionally, in FlashAttention on a Napkin, we used diagrams to quickly derive optimized execution strategies and performance models. In contrast, typical approaches take years of laborious research to derive such results.

Works

This research opens many branches for further work at the intersection of category theory and the practical aspects of deep learning design, including optimizing resource usage. So far, we have published FlashAttention on a Napkin in addition to Vincent Abbott's previous works.

Future work will encompass:

• Formalizing the category theory further, developing a symbolic framework which captures diagrams and the graphical "moves" for deriving optimizations.
• Developing an automated framework which uses a categorical data structure to convert standard PyTorch implementations into the symbolic framework, and then into optimized code and accurate performance models.
• Creating a graphical dashboard which incorporates these tools, allowing deep learning engineers to work directly with diagrams and be automatically notified of resource usage considerations.
• Integrating our work on resource analysis of deep learning algorithms with categorical co-design, allowing us to optimize the full deep-learning stack, including hardware.

The aim, then, is to use category theory to create an indispensable tool for innovating efficient deep learning models.

Sample Diagrams of Complete Architectures

Diagram of the original transformer architecture from Attention Is All You Need (Jun 2017).
Diagram of Mixtral-8x7B (Dec 2023), an open-source model which beat the original release of ChatGPT (Nov 2022). The attention block captures five years of innovation, while the feed-forward layer is completely changed from a fully-connected layer to a Mixture-of-Experts, which holds a vast number of parameters, only some of which are used on each pass.
Diagram of DeepSeek-V3 (Dec 2024), an open-source model which caught up to the latest models from OpenAI, Google, and others. Note the innovations in the attention block, and the wide mixture-of-experts feed-forward layer used.
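The sparse Mixture-of-Experts routing mentioned in the captions above can be sketched in a few lines. This is a minimal illustrative sketch, not Mixtral's or DeepSeek's actual implementation: the names `moe_layer` and `gate_w`, and the simple top-k softmax routing, are assumptions for illustration only.

```python
import numpy as np

def moe_layer(x, gate_w, experts, k=2):
    """Route a token to its top-k experts; only those experts are evaluated.

    x:       (d,) token vector
    gate_w:  (n_experts, d) gating weights
    experts: list of n_experts callables, each mapping (d,) -> (d,)
    """
    scores = gate_w @ x                    # one gating logit per expert
    top = np.argsort(scores)[-k:]          # indices of the k highest-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over the selected experts only
    # Only k of the n experts run, so most of the layer's parameters stay idle
    # on any given pass -- the source of MoE's capacity-vs-compute advantage.
    return sum(w * experts[i](x) for w, i in zip(weights, top))
```

With eight experts and k = 2 (Mixtral-8x7B's configuration), each token activates only a quarter of the feed-forward parameters per pass.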