Fengxue (Alex) Zhang is a Ph.D. student in Computer Science at the University of Chicago, specializing in machine learning with a focus on decision-making under uncertainty.

Research Interests

His research centers on learning and decision-making under uncertainty. Specifically, he develops novel deep learning and statistical learning methods with applications spanning protein design, materials science, and large language models. Through rigorous uncertainty quantification, these methods actively gather data and propose new candidates in a sample-efficient manner, optimizing domain-specific objectives while maintaining robustness and reliability.
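
As a high-level illustration of this workflow, the snippet below shows a minimal Gaussian-process-driven query loop that scores candidates by an upper confidence bound and evaluates the most promising one. It is a generic sketch, not taken from any of the papers listed below; the objective, candidate pool, and budget are hypothetical placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def expensive_objective(x):
    # Hypothetical stand-in for a costly experiment or simulation.
    return float(-np.sum((x - 0.3) ** 2))

# Small initial design and a fixed pool of candidate inputs (for simplicity).
X = rng.uniform(0.0, 1.0, size=(5, 2))
y = np.array([expensive_objective(x) for x in X])
candidates = rng.uniform(0.0, 1.0, size=(500, 2))

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(20):  # query budget
    gp.fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    ucb = mean + 2.0 * std            # upper confidence bound acquisition
    x_next = candidates[np.argmax(ucb)]
    y_next = expensive_objective(x_next)
    X = np.vstack([X, x_next])
    y = np.append(y, y_next)

print("best observed value:", y.max())
```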

Publications

Conference Papers

Robust Multi-fidelity Bayesian Optimization with Deep Kernel and Partition
Fengxue Zhang, Thomas Desautels, Yuxin Chen
The 28th International Conference on Artificial Intelligence and Statistics (AISTATS 2025)
Paper (camera-ready coming soon)

Develops a robust multi-fidelity Bayesian optimization framework that leverages deep kernels and partitioning to handle noisy, heterogeneous information sources. The method enables efficient optimization across multiple fidelity levels of varying reliability and cost.

Constrained Multi-objective Bayesian Optimization through Optimistic Constraints Estimation
Diantong Li, Fengxue Zhang, Chong Liu, Yuxin Chen
The 28th International Conference on Artificial Intelligence and Statistics (AISTATS 2025)
Paper | Code

Presents a novel approach to constrained multi-objective Bayesian optimization that uses optimistic constraint estimation to efficiently explore the feasible Pareto frontier. This method effectively handles multiple objectives while ensuring constraint satisfaction with high probability.
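
The optimistic-constraint idea can be illustrated with a deliberately simplified, single-objective sketch (not the paper's algorithm): model the black-box constraint with its own GP, keep only candidates whose optimistic constraint estimate is still feasible, and apply a standard acquisition over that set. All function names and thresholds here are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

def objective(x):      # hypothetical black-box objective (to maximize)
    return float(np.sin(3 * x[0]) + x[1])

def constraint(x):     # hypothetical black-box constraint, feasible when <= 0
    return float(x[0] + x[1] - 1.2)

X = rng.uniform(0.0, 1.0, size=(6, 2))
f = np.array([objective(x) for x in X])
c = np.array([constraint(x) for x in X])
candidates = rng.uniform(0.0, 1.0, size=(400, 2))

gp_f = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp_c = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(15):
    gp_f.fit(X, f)
    gp_c.fit(X, c)
    mu_f, sd_f = gp_f.predict(candidates, return_std=True)
    mu_c, sd_c = gp_c.predict(candidates, return_std=True)

    # Optimistic feasibility: keep points whose lower confidence bound on the
    # constraint is still <= 0, i.e. feasibility cannot yet be ruled out.
    optimistic_feasible = (mu_c - 2.0 * sd_c) <= 0.0
    ucb = np.where(optimistic_feasible, mu_f + 2.0 * sd_f, -np.inf)

    x_next = candidates[np.argmax(ucb)]
    X = np.vstack([X, x_next])
    f = np.append(f, objective(x_next))
    c = np.append(c, constraint(x_next))
```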

No-Regret Learning of Nash Equilibrium for Black-Box Games via Gaussian Processes
Minbiao Han*, Fengxue Zhang*, Yuxin Chen (* equal contribution)
The 40th Conference on Uncertainty in Artificial Intelligence (UAI 2024)
Paper | Code

Introduces a novel approach for learning Nash equilibria in black-box games using Gaussian processes. This work bridges the gap between game theory and Bayesian optimization, providing theoretical guarantees for convergence to a Nash equilibrium without requiring explicit access to game payoff functions.

Learning Regions of Interest for Bayesian Optimization with Adaptive Level-set Estimation
Fengxue Zhang, Jialin Song, James C. Bowden, Alexander Ladd, Yisong Yue, Thomas Desautels, Yuxin Chen
International Conference on Machine Learning (ICML 2023)
Paper | Code

Presents an innovative approach to Bayesian optimization that efficiently identifies and focuses on regions of interest through adaptive level-set estimation. This method significantly improves sample efficiency in high-dimensional optimization problems by concentrating exploration in promising areas.
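
The general intuition behind filtering the search space with confidence bounds can be sketched as follows (a simplified illustration, not the paper's algorithm): candidates whose upper confidence bound falls below the best lower confidence bound are unlikely to contain the optimum, so acquisition is restricted to the remaining region of interest. All names below are hypothetical, and the max-uncertainty rule inside the region is just one simple acquisition choice.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(2)

def black_box(x):  # hypothetical expensive objective (to maximize)
    return float(-np.linalg.norm(x - 0.7))

X = rng.uniform(0.0, 1.0, size=(5, 3))
y = np.array([black_box(x) for x in X])
candidates = rng.uniform(0.0, 1.0, size=(1000, 3))

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
beta = 2.0  # confidence width

for _ in range(15):
    gp.fit(X, y)
    mu, sd = gp.predict(candidates, return_std=True)
    ucb, lcb = mu + beta * sd, mu - beta * sd

    # Region of interest: candidates whose UCB still exceeds the best LCB,
    # i.e. points not yet ruled out as potential maximizers.
    roi = ucb >= lcb.max()

    # Explore the most uncertain point inside the region of interest
    # (UCB or EI restricted to the region are equally valid choices).
    scores = np.where(roi, sd, -np.inf)
    x_next = candidates[np.argmax(scores)]

    X = np.vstack([X, x_next])
    y = np.append(y, black_box(x_next))
```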

Workshop Papers

Robust Multi-fidelity Bayesian Optimization with Deep Kernel and Partition
Fengxue Zhang, Thomas Desautels, Yuxin Chen
NeurIPS 2024 Workshop on Bayesian Decision-making and Uncertainty
Paper

Develops a robust framework for multi-fidelity Bayesian optimization that combines deep kernels with intelligent space partitioning. This approach effectively handles varying fidelity levels in optimization tasks while maintaining computational efficiency.

Finding Interior Optimum of Black-box Constrained Objective with Bayesian Optimization
Fengxue Zhang, Zejie Zhu, Yuxin Chen
NeurIPS 2024 Workshop on Bayesian Decision-making and Uncertainty
Paper

Addresses the challenging problem of finding interior optima of constrained objectives with Bayesian optimization. The method provides practical solutions for scenarios where boundary solutions are undesirable or infeasible.

Constrained Multi-objective Bayesian Optimization
Diantong Li, Fengxue Zhang, Chong Liu, Yuxin Chen
NeurIPS 2024 Workshop on Bayesian Decision-making and Uncertainty
Paper

Presents a novel approach to handling multiple competing objectives in Bayesian optimization while respecting constraints, extending BO to more complex real-world scenarios.

Design of Physical Experiments via Collision-free Latent Space Optimization
Fengxue Zhang, Yair Altas, Louise Fan, Kaustubh Vinchure, Brian Nord, Yuxin Chen
NeurIPS 2020 Workshop on Machine Learning and the Physical Sciences
Paper | Poster

Introduces an innovative approach to designing physical experiments through optimization in a collision-free latent space. This work bridges the gap between theoretical optimization and practical experimental design constraints.