“To understand human intelligence, you must reject all meager proxies for it. Seek a principled approach, reject mechanistic approaches, and do not compromise. Tell yourself the right story. Explain everything as simply and clearly as possible. Have a vision that is worth dedicating an entire career to, and solve problems that make progress toward your vision. When you tell people about the problems that you solve, make sure you first tell them what your vision is. Tell them in a way that will inspire somebody to dedicate an entire career to it.”

— Adam Kraft; Acknowledgments, “Vision by Alignment” (Ph.D. Thesis, June of 2018)

—  Status  —

I am a first-year (rising second-year as of September 2018) Ph.D. student in the Computer Science department at Boston University in Massachusetts.  Currently I am a member of the LISP (Learning, Intelligence, and Signal Processing) group under the supervision of Professor Sang (“Peter”) Chin.  I am also a collaborating researcher with the Genesis group at the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology, under the supervision of Professor Patrick Henry Winston.

—  Research  —

At some point during my first meeting with Patrick Winston, I brought up something that I had been thinking about, on and off, for a while: how one might go about putting together, say, 100 “neurons” into a computational bundle and training up that bundle using backpropagation, Fourier transforms, Markov chains, or whatever mathematical tricks one can come up with.  The reason I had been thinking about this was not that I adored the bottom-up approach; rather, it was that I was thinking very seriously about tackling the human-intelligence problem both top-down and bottom-up, so that the two approaches might, somehow, meet in the middle, and then, Bingo!, we would have what we want.  Before I could finish my sentence, Professor Winston interrupted me and said, “Well, the answer is you’re not gonna find it.  You can’t put a bunch of stuff into the soup and hope for the best.”  At another point during the meeting, I also brought up my concern that, someday, when we have developed a sufficient understanding of human intelligence and feel confident that we can finally build truly intelligent machines, we ought to pay close attention to the energy consumption of those machines, so that they not only do what we do, and more than we can do, but do so in an energy-efficient way and help us solve the serious problems that we face and will face.  Professor Winston grinned and, without a pause, said, “See, that’s the kind of stuff you’re only able to do when you know how to do it.”
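For concreteness, the “put 100 neurons into a bundle and train it with backpropagation” idea can be sketched in a few lines of code.  What follows is purely a toy illustration, with made-up hyperparameters and a made-up task (XOR), not anything from my actual research: a two-layer network with 100 hidden units, trained by plain backpropagation.

```python
import numpy as np

# A toy "bundle of ~100 neurons": a two-layer fully connected network
# trained by plain backpropagation on XOR.  Purely illustrative.
rng = np.random.default_rng(0)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

H = 100                              # the "100 neurons" of the bundle
W1 = rng.normal(0.0, 0.5, (2, H))
b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)         # hidden-layer activations
    return h, sigmoid(h @ W2 + b2)   # network output in (0, 1)

_, out = forward(X)
loss_initial = float(np.mean((out - y) ** 2))

lr = 0.5
for _ in range(5000):
    h, out = forward(X)
    # Backward pass: gradients of the mean-squared error, propagated
    # back through the sigmoids -- the "backpropagation" step.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

_, out = forward(X)
loss_final = float(np.mean((out - y) ** 2))
print(f"loss: {loss_initial:.4f} -> {loss_final:.4f}")
```

The bundle does learn the toy task, which is exactly the point of Winston’s remark: gradient descent over a soup of neurons optimizes a function, but nothing in the loop above tells you anything about how human intelligence works.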

I had been thinking about what it means to think for a long time, and that has now led me to commit to research on developing a top-to-bottom computational understanding of human intelligence.  I have become more convinced than ever that the right way to do this is to take a top-down approach.  I fully concur that it is important to model and understand how information is relayed in our sensory and motor systems, because, after all, our intelligence lies right within our “I/O channels,” and those I/O channels are our perceptual and motor faculties.  Brain imaging shows, for instance, that when I close my eyes right now and visualize how my computer screen might be filled with different stuff, the same brain areas light up as if I actually were looking at a changed computer screen; when I close my eyes and visualize how I might reach out my left hand for the water bottle at the corner of my desk, the same brain areas light up as if I actually were reaching out my left hand and grasping the bottle according to its diameter and its current distance from me.  Similar empirical results relate not just to vision and to fingers, wrists, and arms, but also to hearing, touch, smell, and taste, and to the neck, shoulders, torso, legs, toes, and so on, suggesting that our perceptual and motor faculties and our thinking are intertwined, rather than perception and motor coming first and thinking afterward.  So, yes, I concur that neurological and psychological research devoted to such findings is fundamentally indispensable to a top-to-bottom computational understanding of our intelligence.  Nonetheless, I do not believe that perception and motor control alone tell the whole story.  One of the marvelous abilities that animals with brains possess, and animals without brains lack, is the ability to carry out higher-level processing of perceptual information and to use that processed information, in turn, to carry out meaningful actions in the physical world.  The more intelligent a species is, the better the higher-level processing it is able to carry out.  We, as the most intelligent species by a large, non-incremental margin over the second-most intelligent species, must possess a very distinctive set of such higher-level cognitive capabilities.

What exactly is in this unique set of capabilities?  Hmmm, that is the research question that I am, and will be, after.  For the appetizer, I have adopted the view that what is needed is to model the part of our intelligence that distinguishes us from all other animals, and the distinguishing element is our unique ability to tell and understand stories, an ability at a level that no other species matches.  In terms of the entrée, I confess that I must do much more work to build it.  Exactly how much are we talking?  Well, as far as I can see at the moment, the probability that I won’t devote my entire (or almost entire) career to this question is bounded above by some exponentially small number.
[More to be said, and subject to further changes...]
P.S.:  Notice that, up to this point on this page, I haven’t mentioned the keywords “Artificial Intelligence” or “AI” or “Cognitive Science” or any of their synonyms. (OK, I did mention “computational,” “understand,” “human intelligence,” and “cognitive.” Fair.)  Yet you have no trouble understanding that I’m not just talking about AI but also trying to do AI, and that I’m already beginning to make some progress.  Why do you have no trouble?  Because you can understand not just simple toy stories, but also complex real-life stories like the ones I’ve just told.

—  Teaching  —
  • During Fall of 2017, I was a teaching assistant for CS101, a general introductory course on computer science.
  • During Spring of 2018, I was a teaching assistant for CS132, an introductory course on geometric algorithms.
  • During Summer1 of 2018, I was a teaching assistant for CS132.
  • During Summer2 of 2018, I was a teaching assistant for CS542, Machine Learning.

—  Myself  —
  • Having played the piano since the age of five, competed a few times, and won a few awards, I self-identify as a semi-professional pianist.  Although today I am by no means a professional pianist, or musician for that matter, my extensive experience with music continues to aid me in thinking about human cognition.  I also picked up the guitar in high school, but I have played it very much on and off, and my guitar skill is no match for my piano skill.
  • Having earned a blue belt in Taekwondo at the age of twelve, but having subsequently quit, I self-identify as a martial-arts dilettante, with a continuing enthusiasm for fitness, martial arts, and self-defense.
  • Having won some kind of first prize in sketching when I was nine, but having subsequently quit, I continue to benefit from my ability to visualize 3D spaces and sketch them relatively well.