Zekun Wang

Georgia Institute of Technology


TSRB 230B

Atlanta, GA 30332

About Me

Hello there! I’m Zekun “Anderson” Wang. I’m a second-year Ph.D. student at Georgia Tech working with Prof. Christopher J. MacLellan. I’m interested in the intersection of cognitive science and artificial intelligence, and in how we can “learn so much from so little” in a way that supports far transfer. Before joining Georgia Tech, I completed my M.S. in Computer Science at the University of Michigan, where I had the great fortune to work with Dr. Joyce Chai and Dr. Rada Mihalcea. I obtained my B.S. in Computer Science and B.S. in Mathematics from the Pennsylvania State University, where I worked with Dr. Rebecca J. Passonneau on NLP.

Research Interests

I am deeply interested in the way we, particularly children, acquire and derive novel knowledge from past learned experiences (limited data) and then transfer it far. Such a learning scheme stands in stark contrast to the current deep learning paradigm, which is data-hungry and amounts to “modeling”; human “learning”, by contrast, is an active, interactive, efficient, adaptive, and creative process. My research explores these fundamental limitations of deep learning and machine learning using approaches inspired by human cognitive functions. In particular, I am interested in the following fundamental questions:

  • Compositionality: One of the key components I deem essential to learning fast and being creative. My hypothesis is that our mind creates heuristics to search over and compose past knowledge (or parts of it) to create the best fit for the current understanding. Consider the example of learning math: in principle, if one knows how to perform elementary arithmetic, then one should be able to derive calculus, linear algebra, and so on. However, such a derivation may never arrive without the proper “kicks”, or the right heuristics. I am interested in 1. how we can develop a compositional memory and 2. how we can learn the best “kicks” for knowledge composition.
  • Continual Learning: A natural extension of compositionality. In the real world, we are constantly learning new things and, at the same time, forgetting things. However, that forgetting does not hugely impact how we utilize old knowledge or acquire new knowledge. Theories of replay buffers, memory consolidation, and catastrophic forgetting suggest that our learning is not linear and might be stochastic.
  • Active Learning: Goal-oriented agents like humans create unique objectives for different needs. This requires the agent to 1. derive desires and 2. be able to plan subgoals to achieve those desires. My current approach is to frame this problem in the context of reinforcement learning and inverse reinforcement learning.
  • Meta-Learning: Finally, meta-learning is the general underlying principle that enables all of the above. Humans learn how to learn math by adjusting their learning behavior through practice and experience, and they learn which past knowledge to use and which to ignore.

news

Sep 18, 2025 Our paper “Deep Taxonomic Networks for Unsupervised Hierarchical Prototype Discovery” was accepted to NeurIPS 2025 as a poster. See you in San Diego!

selected publications

  1. NeurIPS 2025
    Deep Taxonomic Networks for Unsupervised Hierarchical Prototype Discovery
    Zekun Wang, Ethan Haarer, Tianyi Zhu, and 2 more authors
    2025
  2. ACS 2025
    Hierarchical Semantic Retrieval with Cobweb
    Anant Gupta, Karthik Singaravadivelan, Zekun Wang, and 1 more author
    2025
  3. NeuS 2025
    Taxonomic Networks: A Representation for Neuro-Symbolic Pairing
    Zekun Wang, Ethan Haarer, Nicki Barari, and 1 more author
    2025
  4. CogSci 2025
    Computer Vision Models Show Human-Like Sensitivity to Geometric and Topological Concepts
    Zekun Wang, and Sashank Varma
    2025
  5. NAACL 2025
    Babysit A Language Model From Scratch: Interactive Language Learning by Trials and Demonstrations
    Ziqiao Ma, Zekun Wang, and Joyce Chai
    In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Apr 2025
  6. LREC-COLING 2024
    Has It All Been Solved? Open NLP Research Questions Not Solved by Large Language Models
    Oana Ignat, Zhijing Jin, Artem Abzaliev, and 19 more authors
    Apr 2024
  7. ICOTS 2022
    Foundations for AI-Assisted Formative Assessment Feedback for Short-Answer Tasks in Large-Enrollment Classes
    Susan Lloyd, Matthew Beckman, Dennis Pearl, and 3 more authors
    In Bridging the Gap: Empowering and Educating Today’s Learners in Statistics. Proceedings of the Eleventh International Conference on Teaching Statistics, Dec 2022