Last updated: 2023-01-06

Hi! I'm a second-year PhD candidate in CSE at the University of Michigan. My research interest lies at the intersection of software systems and deep learning, with a recent focus on sustainability aspects such as energy consumption and carbon footprint. I lead the ML Energy initiative. I am fortunate to be advised by Professor Mosharaf Chowdhury and to be part of SymbioticLab.

Find the PDF version of my CV here.


Publications

Zeus: Understanding and Optimizing GPU Energy Consumption of DNN Training

Jie You*, Jae-Won Chung* (*: co-primary authors), Mosharaf Chowdhury
USENIX Symposium on Networked Systems Design and Implementation (NSDI), 2023 (Acceptance Rate: 18.38%)
Website, DOI, arXiv, PDF, YouTube, Slides

ShadowTutor: Distributed Partial Distillation for Mobile Video DNN Inference

Jae-Won Chung, Jae-Yun Kim, and Soo-Mook Moon
International Conference on Parallel Processing (ICPP), 2020 (Acceptance Rate: 28.99%)
DOI, arXiv, PDF, YouTube, Slides


Research Experience

Zeus: Energy-Efficient DNN Training on GPUs

Sep 2021 - Apr 2022
SymbioticLab @ UMich CSE

• Promoting energy as a first-class resource in DNN training on GPUs.
• Observing and optimizing the GPU energy consumption of DNN training (a measurement sketch follows this list).
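
The observation side builds on GPU power measurement. Below is a minimal sketch of that idea using the pynvml NVML bindings: poll the GPU's power draw and integrate it over time. The device index, polling interval, and 10-second window are illustrative assumptions, not Zeus's actual mechanism.

```python
# Sketch: estimate GPU energy by integrating polled power draw over time.
# Assumes pynvml is installed and an NVIDIA GPU is visible at index 0.
import time

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # GPU 0 (assumption)

energy_joules = 0.0
start = last = time.monotonic()
while time.monotonic() - start < 10.0:  # 10 s measurement window (assumption)
    time.sleep(0.1)  # ~100 ms polling interval (assumption)
    now = time.monotonic()
    power_watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # mW -> W
    energy_joules += power_watts * (now - last)  # rectangle-rule integration
    last = now

print(f"Estimated GPU energy over the window: {energy_joules:.1f} J")
pynvml.nvmlShutdown()
```

On recent GPUs, NVML also exposes a cumulative energy counter (nvmlDeviceGetTotalEnergyConsumption), which avoids polling error; the sketch polls power since that works more broadly.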

Crane: A GPU Cluster Resource Manager for Elastic AutoML

Mar 2020 - May 2022

• Worked with Professor Byung-Gon Chun.
• Developed Crane, a GPU cluster resource manager for elastic AutoML workloads.
• Implemented Crane's Kubernetes backend and efficient AutoML scheduling for GPU clusters.
• Took charge of bootstrapping and mentoring newer Crane team members (interns and graduate students).

ShadowTutor: Server-client collaborative DNN inference

Dec 2019 - Jun 2020

• Worked with Professor Soo-Mook Moon.
• Developed a server-client collaborative video DNN inference method that drastically reduces network traffic via intermittent knowledge distillation (sketched after this list).
• Implemented it with PyTorch and OpenMPI, and evaluated it using an NVIDIA Jetson Nano board as the client.
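
Stripped of the networking, the core loop looks roughly like the sketch below: the client runs a small student model on every frame and only intermittently consults the large teacher (which lives on the server) to distill fresh knowledge into the student. The toy models, the fixed distillation period, and the temperature are illustrative assumptions; the actual system makes these decisions adaptively and communicates over MPI.

```python
# Sketch of intermittent knowledge distillation for video DNN inference.
# The toy models, fixed period, and temperature are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-ins: a "large" teacher (server side) and a small student (client side).
teacher = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10))
student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
teacher.eval()
opt = torch.optim.SGD(student.parameters(), lr=1e-2)
T, PERIOD = 4.0, 16  # distillation temperature and period (assumptions)

for frame_idx in range(64):
    frame = torch.randn(1, 3, 32, 32)  # stand-in for a decoded video frame
    student_logits = student(frame)    # cheap per-frame inference on the client
    if frame_idx % PERIOD == 0:        # intermittently consult the teacher
        with torch.no_grad():          # (in the real system: over the network)
            teacher_logits = teacher(frame)
        # Distill: match softened student and teacher distributions via KL.
        loss = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                        F.softmax(teacher_logits / T, dim=1),
                        reduction="batchmean") * T * T
        opt.zero_grad()
        loss.backward()
        opt.step()
```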

Meta-learning, Few-shot Classification

Jun 2019 - Dec 2019

• Worked with Professor Kyoung Mu Lee.
• Better meta-initialization points for Model-Agnostic Meta-Learning (MAML) using an LSTM-based neural memory (a generic MAML inner-loop sketch follows this list).
• Augmenting the feature maps of MAML with task-aware class embeddings generated with a convex program (DPP).
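
For context, a generic sketch of the MAML inner loop that such meta-initializations feed into: the shared initialization theta is adapted to each task with a gradient step, and the meta-objective backpropagates through that adaptation. The linear model, made-up task data, and step size are illustrative assumptions, not the memory-augmented variant above.

```python
# Generic MAML inner loop: adapt shared initialization theta to one task with
# a single gradient step. The linear model and random data are illustrative.
import torch
import torch.nn.functional as F

theta = {"w": torch.randn(5, 1, requires_grad=True),
         "b": torch.zeros(1, requires_grad=True)}
inner_lr = 0.1  # inner-loop step size (assumption)

def forward(params, x):
    return x @ params["w"] + params["b"]

# One task's support and query sets (made-up regression data).
x_support, y_support = torch.randn(10, 5), torch.randn(10, 1)
x_query, y_query = torch.randn(10, 5), torch.randn(10, 1)

# Inner step: compute adapted parameters theta' without overwriting theta,
# keeping the graph (create_graph=True) so the meta-loss can backprop through.
support_loss = F.mse_loss(forward(theta, x_support), y_support)
grads = torch.autograd.grad(support_loss, list(theta.values()), create_graph=True)
theta_prime = {k: v - inner_lr * g for (k, v), g in zip(theta.items(), grads)}

# Outer step: evaluate theta' on the query set; gradients flow back into the
# meta-initialization theta through the inner update.
query_loss = F.mse_loss(forward(theta_prime, x_query), y_query)
query_loss.backward()
print(theta["w"].grad.shape)  # meta-gradient w.r.t. the initialization
```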

Deep Learning for Quantitative Susceptibility Mapping (QSM)

Jun 2019 - Aug 2019

• Worked with Professor Jongho Lee.
• Designed a U-Net variant (a minimal sketch follows this list) and trained it on augmented MRI data.
• Participated in the QSM challenge held by the 5th International Workshop on MRI Phase Contrast and QSM.
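
For reference, a minimal sketch of the U-Net architecture family the variant builds on: an encoder-decoder with a skip connection that carries full-resolution features to the decoder. Depth, channel counts, and the single-channel regression output are illustrative assumptions, not the actual challenge model.

```python
# Minimal U-Net-style encoder-decoder with one skip connection, sketching the
# architecture family. Depth, channel counts, and I/O shapes are illustrative.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        # Single-channel regression head (e.g., a susceptibility map).
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 1))

    def forward(self, x):
        e = self.enc(x)                 # encoder features at full resolution
        m = self.mid(self.down(e))      # bottleneck at half resolution
        u = self.up(m)                  # upsample back to full resolution
        return self.dec(torch.cat([u, e], dim=1))  # skip connection: concat

print(TinyUNet()(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```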

Honors and Awards

Second Best Overall Solution

Nov 2022

Carbon Hack '22 (organized by the Green Software Foundation)

Carbon-Aware DNN Training with Zeus, $25,000

Kwanjeong Overseas Scholarship

Jul 2021

Kwanjeong Educational Foundation

Four years, $25,000 per year

Kwanjeong Undergraduate Scholarship

Mar 2019

Kwanjeong Educational Foundation

Two years, $10,000 per year



Activities

Software/Systems Reading Group at Michigan CSE

Sep 2022 - Present

External Activities


Dec 2018 - Present

A free research group on all domains of deep learning

• Gained extensive experience in computer vision and meta-learning, and attended talks on computer vision, natural language processing, reinforcement learning, and speech recognition.
• Gave a talk titled "Memory plus Meta-Learning".

Language Coordinator

Oct 2018 - Mar 2021

An official Coursera community that translates Coursera lecture subtitles

• Served as Language Coordinator, a selected position that reviews and approves translations by other contributors.
• Created Korean subtitles for Coursera lectures initially offered only in English, focusing on courses related to machine learning.


Teaching

Operating Systems

Spring 2021

Main TA
Lectured on the Linux kernel, managed term projects, and led group design reviews.

Computer Organization (undergraduate computer architecture)

Fall 2020

Peer tutor
Provided 30 hours of online lectures; received the Best Tutor Award.

Skills & Proficiency

Python, PyTorch, Kubernetes

Rust, CUDA, Verilog

C++, Go

JavaScript, MATLAB

Open Source Projects

Pegasus: A Lightweight Multi-Node Parametrized Command Runner

"I really don't want to run all these experiment commands on 20 nodes manually."

Reason: A Shell for Research Papers

"Did I ever read this paper? Which papers have the word 'training' in their titles?"