depyf: Open the Opaque Box of PyTorch Compiler for Machine Learning Researchers
Authors: Kaichao You†, Runsheng Bai†, Meng Cao, Jianmin Wang†, Ion Stoica‡, Mingsheng Long†
PyTorch 2.x introduces a compiler designed to accelerate deep learning programs. However, for machine learning researchers, adapting their workflows to exploit the compiler's full potential can be challenging. The compiler operates at the Python bytecode level, making it appear as an opaque box. To address this, we introduce depyf, a tool designed to demystify the inner workings of the PyTorch compiler. depyf decompiles the bytecode generated by the PyTorch compiler back into equivalent source code, and establishes connections between in-memory code objects and their on-disk source-code counterparts. This lets users step through the source code line by line with a debugger, deepening their understanding of the underlying processes. Notably, depyf is non-intrusive and user-friendly, relying primarily on two convenient context managers for its core functionality.
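Since the compiler operates at the Python bytecode level, the standard library's `dis` module gives a feel for what that level looks like. The sketch below is a minimal illustration of inspecting bytecode, not depyf itself; depyf's contribution is the reverse direction, turning such bytecode back into readable source:

```python
import dis

def f(x):
    return x * 2 + 1

# Disassemble the function to see the bytecode instructions the
# PyTorch compiler rewrites and that depyf decompiles back to source.
bytecode = dis.Bytecode(f)
opnames = [ins.opname for ins in bytecode]
print(opnames)
```

The exact opcode names vary across CPython versions, but loads of locals and constants and a final return instruction are always present; this is the opaque representation that depyf maps back to debuggable source files.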
UICoder: Finetuning Large Language Models to Generate User Interface Code through Automated Feedback
August 15, 2025 · Research areas: Human-Computer Interaction, Methods and Algorithms · Conference: NAACL
Large language models (LLMs) struggle to consistently generate UI code that compiles and produces visually relevant designs. Existing approaches to improve generation rely on expensive human feedback or distilling a proprietary model. In this paper, we explore the use of automated feedback (compilers and multi-modal models) to guide LLMs to generate high-quality UI code. Our method starts with an existing LLM and iteratively produces improved…
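The core idea, using compilers and multi-modal models as automatic judges to filter generated UI code, can be sketched as a simple keep-the-passing-samples loop. All names below are hypothetical stand-ins; the excerpt does not specify the paper's actual pipeline:

```python
# Hedged sketch of an automated-feedback filter: keep only generated
# samples that compile and that a scoring model rates above a threshold.
# `compiles` and `score` are hypothetical callables standing in for a
# real compiler check and a multi-modal relevance scorer.
def feedback_filter(samples, compiles, score, threshold=0.5):
    return [s for s in samples if compiles(s) and score(s) >= threshold]

# Toy usage with stand-in checkers:
samples = ["good_ui_code", "broken_code"]
kept = feedback_filter(
    samples,
    compiles=lambda s: "broken" not in s,
    score=lambda s: 0.9 if "good" in s else 0.1,
)
print(kept)  # → ['good_ui_code']
```

In an iterative setup, the surviving samples would be used as finetuning data for the next round, so each generation of the model is trained only on code that passed the automated checks.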
Deploying Transformers on the Apple Neural Engine
June 6, 2022 · Research areas: Computer Vision, Speech and Natural Language Processing
An increasing number of the machine learning (ML) models we build at Apple each year are either partly or fully adopting the Transformer architecture. This architecture helps enable experiences such as panoptic segmentation in Camera with HyperDETR, on-device scene analysis in Photos, image captioning for accessibility, machine translation, and many others. This year at WWDC 2022, Apple is making available an open-source reference PyTorch implementation of the Transformer architecture, giving developers worldwide a way to seamlessly deploy their state-of-the-art Transformer models on Apple devices.