Zekun Wang
TSRB 230B
Atlanta, GA 30332
About Me
Hello there! I’m Zekun “Anderson” Wang. I’m a second-year Ph.D. student at Georgia Tech advised by Prof. Christopher J. MacLellan. I’m interested in the intersection of cognitive science and artificial intelligence, and in how we can “learn so much from so little” in a way that supports far transfer. Before joining Georgia Tech, I completed my M.S. in Computer Science at the University of Michigan, where I had the great fortune to work with Dr. Joyce Chai and Dr. Rada Mihalcea. I obtained my B.S. in Computer Science and B.S. in Mathematics from the Pennsylvania State University, where I worked with Dr. Rebecca J. Passonneau on NLP.
Research Interests
I am deeply interested in how we, particularly children, acquire and derive novel knowledge from limited past experience and then transfer it far. Such a learning scheme stands in stark contrast to the current deep learning paradigm, which is data-hungry and better described as “modeling”; “learning”, by contrast, is an active, interactive, efficient, adaptive, and creative process. My research explores these fundamental limitations of deep learning and machine learning using approaches inspired by human cognitive functions. I’m interested in exploring the following fundamental questions:
- Compositionality: One of the key components I deem essential to learning fast and being creative. My hypothesis is that our mind creates heuristics to search over and compose past knowledge (or parts of it) to build the best fit for our current understanding. Consider the example of learning math: in principle, if one knows how to perform elementary arithmetic, one should be able to derive calculus, linear algebra, and so on. However, such a derivation may never arrive without the proper “kicks”, or the right heuristics. I’m interested in (1) how we can develop a compositional memory and (2) how we can learn the best “kicks” for knowledge composition.
- Continual Learning: A natural extension of compositionality. In the real world, we are constantly learning new things and, at the same time, forgetting old ones. Yet this hardly affects how we utilize old knowledge or acquire new knowledge. Theories of replay buffers, memory consolidation, and catastrophic forgetting suggest that our learning is not linear and may be stochastic.
- Active Learning: Goal-oriented agents like humans create unique objectives for different needs. This requires the agent to (1) derive desires and (2) plan subgoals to achieve those desires. My current approach is to frame this problem in the context of reinforcement learning and inverse reinforcement learning.
- Meta-Learning: Finally, meta-learning is the general underlying principle that enables all of the above. Humans learn how to learn math by adjusting their learning behavior through practice and experience, and by learning what past knowledge to use and what to ignore.
news
Sep 18, 2025: Our paper “Deep Taxonomic Networks for Unsupervised Hierarchical Prototype Discovery” was accepted to NeurIPS 2025 as a poster. See you in San Diego!
selected publications
- NeurIPS 2025: Deep Taxonomic Networks for Unsupervised Hierarchical Prototype Discovery, 2025
- ACS 2025: Hierarchical Semantic Retrieval with Cobweb, 2025
- NeuS 2025: Taxonomic Networks: A Representation for Neuro-Symbolic Pairing, 2025
- CogSci 2025: Computer Vision Models Show Human-Like Sensitivity to Geometric and Topological Concepts, 2025
- NAACL 2025: Babysit A Language Model From Scratch: Interactive Language Learning by Trials and Demonstrations. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Apr 2025
- LREC-COLING: Has It All Been Solved? Open NLP Research Questions Not Solved by Large Language Models, Apr 2024
- ICOTS: Foundations for AI-Assisted Formative Assessment Feedback for Short-Answer Tasks in Large-Enrollment Classes. In Bridging the Gap: Empowering and Educating Today’s Learners in Statistics, Proceedings of the Eleventh International Conference on Teaching Statistics, Dec 2022