Taejong Joo

I am a fourth-year PhD student in the Department of Industrial Engineering & Management Sciences at Northwestern University, where I am fortunate to work with Diego Klabjan. My research focuses on robust machine learning, uncertainty estimation, foundation models, and AI alignment.

My aspiration is to develop AI systems with human-level adaptability and computational properties aligned with human preferences. Such AI systems would enable effective collaboration between humans and machines to tackle evolving real-world challenges that neither could solve alone.

Previously, I was a senior deep learning researcher at ESTsoft, where I worked on distribution shifts, contextual bandits, scalable variational inference, and model compression. I built the company's first ML model, which was successfully deployed in the Korean stock market.

Before that, I obtained my bachelor's and master's degrees at Hanyang University, where I worked on human-machine interaction. My research on formalizing human-machine interactions for safe and efficient manufacturing systems was featured among the top 50 most popular articles in IEEE Transactions on Human-Machine Systems.

I am actively seeking research roles, including internships and full-time positions. With work eligibility in the U.S. and Germany, I am open to relocation and eager to explore diverse domains. While I value publication opportunities, my main goal is to work on pressing challenges with high impact. I thrive in technically and culturally diverse teams and am eager to contribute my skills and perspectives. Thank you for your consideration!

For any inquiries, please feel free to contact me by email: taejong.joo [at] northwestern.edu

Selected Research

For the full list of my publications, visit my Google Scholar.


Improving Self-Training Under Distribution Shifts via Anchored Confidence With Theoretical Guarantees
Taejong Joo, Diego Klabjan
Neural Information Processing Systems (NeurIPS), 2024

  We prove that selectively promoting temporal consistency for confident predictions significantly enhances self-training performance under distribution shifts. This approach prevents the common issue of model collapse—where performance deteriorates after a few epochs of self-training—resulting in improved performance with attractive robustness properties.

IW-GAE: Importance Weighted Group Accuracy Estimation for Improved Calibration and Model Selection in Unsupervised Domain Adaptation
Taejong Joo, Diego Klabjan
International Conference on Machine Learning (ICML), 2024

  We introduce a new approach for simultaneously addressing model calibration and model selection in unsupervised domain adaptation: estimating the average accuracy across subpopulations. For efficient and accurate subpopulation accuracy estimation, we reformulate the high-dimensional importance weight estimation problem as a more tractable coordinate-wise convex optimization problem.

Being Bayesian about Categorical Probability
Taejong Joo, Uijung Chung, Min-Gwan Seo
International Conference on Machine Learning (ICML), 2020

  We propose a scalable variational inference framework using a last-layer Dirichlet model as a new alternative to Bayesian neural networks. Unlike Monte Carlo dropout and deep ensembles, our approach significantly enhances the uncertainty representation ability of deterministic neural networks while preserving their strong generalization performance and efficiency.

Misc.

Guided by first principles and the elegance of Occam’s razor, I believe simplicity often reveals the deepest insights and leads to effective and versatile solutions (with far fewer headaches).

Outside of work, I enjoy experimenting in the kitchen as a self-proclaimed master chef (enthusiastically endorsed by my wife), playing tennis, splashing paint on canvas, and traveling.

Fun fact: My Erdős Number = 3: Taejong Joo -> Diego Klabjan -> Craig Tovey -> Paul Erdős.