Dinner on Math, Representations, and Neural Nets
At dinner, I sat at a table with a physicist, a dude from the 70s (now elderly), and a gentleman from Miami who had obtained weed from his Uber driver. I learned the following:
- One of the reasons Fortran is still used is that it has language and compiler optimizations for numerical computation (I looked into this later and saw that the main optimizations were language-agnostic — e.g. decomposing matrices into submatrices sized to fit the machine’s memory hierarchy, and performing the subcomputations block by block)
- That Fourier analysis (one of those classes I had to take in college, then forgot the material as soon as the quarter was over) is a subdomain of something called spectral analysis (…transforming things from one type of representation to another…)
- Group theory is an outgrowth of “symmetry”… I don’t quite understand what this means yet (I had asked the table what the practical applications of group theory were…)
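The blocking optimization mentioned above can be sketched in a few lines. This is a hypothetical illustration in Python rather than Fortran, and the block size is arbitrary; the point is just that each sub-multiplication touches only small tiles of the matrices, which can stay resident in a fast level of the memory hierarchy while they are reused:

```python
import numpy as np

def blocked_matmul(A, B, block=2):
    """Compute A @ B by iterating over sub-blocks (tiles) of the matrices."""
    n, m = A.shape
    m2, p = B.shape
    assert m == m2, "inner dimensions must match"
    C = np.zeros((n, p))
    for i in range(0, n, block):
        for j in range(0, p, block):
            for k in range(0, m, block):
                # Each tile multiplication reuses small slices of A and B
                # many times before moving on, improving cache behavior.
                C[i:i+block, j:j+block] += (
                    A[i:i+block, k:k+block] @ B[k:k+block, j:j+block]
                )
    return C

A = np.random.rand(4, 4)
B = np.random.rand(4, 4)
assert np.allclose(blocked_matmul(A, B), A @ B)
```

The result is identical to a plain matrix multiply; only the order in which memory is touched changes, which is why the optimization is language-agnostic.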
I was curious about math and asked the physicist a lot of questions about it (when I asked him how he got into programming, he said “I solve equations by decomposing them into matrices and finding the stable representation”… eh? I’m guessing something to do with composing functions to do function approximation, then finding the parameters of a system of equations… apologies to any mathematicians reading this). A couple of days later, it clicked for me that maybe there’s an abstract theme here: framing things in terms of the applicable representations:
- Tables of numbers can be represented as a graph (something we take for granted)
- Algebraic notation can be thought of as a short form representation of tasks people had to perform in ancient times
- Integral notation in calculus is another representation of a Riemann sum
- A sinusoidal waveform can be represented in either the time domain or the frequency domain (the basis of Fourier analysis and signal processing, applied in everything from radios to Bluetooth to anything wireless)
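Two of these correspondences can be checked numerically. A sketch in Python (the function, interval, and signal frequency are arbitrary choices for illustration): the Riemann sum and the integral are two representations of the same quantity, and a sampled sinusoid and its spectrum are two representations of the same signal.

```python
import numpy as np

# Riemann sum: the integral of x^2 on [0, 1] is 1/3; summing the areas of
# thin rectangles is just another representation of that same quantity.
n = 100_000
xs = np.linspace(0, 1, n, endpoint=False)
riemann = np.sum(xs**2) * (1 / n)
assert abs(riemann - 1/3) < 1e-4

# Time vs. frequency domain: a pure 5 Hz sinusoid sampled over 1 second
# appears as a single spike at 5 Hz after a Fourier transform.
t = np.arange(0, 1, 1/256)           # 1 second at 256 samples/s
signal = np.sin(2 * np.pi * 5 * t)   # time-domain representation
spectrum = np.abs(np.fft.rfft(signal))
assert np.argmax(spectrum) == 5      # frequency-domain peak at bin 5 (= 5 Hz)
```

Neither representation adds information; each just makes different questions easy to answer, which is the point of the list above.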
I connected this back with a talk I saw a while ago from Bret Victor. The talk centered on how representations of thought relate to interfaces, but what clicked for me is that whether it’s user interfaces, math, or something else, representations are what allow different types of thoughts to be thought — they are something very fundamental. Maybe they can also be thought of as what allows communication: in the case of a device’s interface or a dataset’s visualization, isn’t that the representation through which the device or the data communicates with an interfacer/viewer, in the same way that choice of language is the representation used when humans communicate with one another? It’s an interesting watch: https://www.youtube.com/watch?v=agOdP2Bmieg
I’ll want to better understand how neural networks and linear algebra tie together… ultimately, I want to understand the boundaries of connectionist techniques. That’s important to me for two reasons: it’ll be important to understand which roles in society can be augmented by the technology (neural nets play Go and drive cars, but my understanding is that, standalone, they can’t implement reasoning), and since I’m interested in AGI it will be important to understand how the technology can be combined with other techniques (either ones developed in the 20th century or ones yet to be invented).
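The core of the neural-net/linear-algebra connection fits in a few lines: a layer is a matrix-vector product plus a nonlinearity, and a network is a composition of such layers. A minimal sketch (shapes and random weights are arbitrary; no training, just the forward pass):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(W, b, x):
    """One neural-net layer: a linear map (W @ x + b) followed by ReLU."""
    return np.maximum(W @ x + b, 0)

x = rng.standard_normal(4)                               # input vector
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((2, 8)), rng.standard_normal(2)

h = layer(W1, b1, x)   # hidden representation (8-dimensional)
y = W2 @ h + b2        # output layer, left linear (2-dimensional)
assert h.shape == (8,) and y.shape == (2,)
```

Without the nonlinearity, composing the layers would collapse into a single matrix product; it’s the interleaved nonlinearities that let the composition approximate non-linear functions.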
Could representations be the frameworks from which techniques are drawn? When a programmer wears the procedural, OO, or functional hat, for instance.