Author Spotlight

If Data Science Feels Like a Struggle, You Might Just Be on the Right Path

“Most good things come with a touch of discomfort.”

TDS Editors
Towards Data Science
6 min read · Apr 7, 2021


In the Author Spotlight series, TDS Editors chat with members of our community about their career path in data science, their writing, and their sources of inspiration. Today, we’re thrilled to present our conversation with Robert Tjarko Lange.

Image courtesy of Robert Lange

Robert is a second-year PhD student at the Technical University Berlin, working on reinforcement learning for large multi-agent systems. Previously, he completed a master’s degree in Computing at Imperial College London and dabbled in cognitive neuroscience while working at the Einstein Center for Neurosciences Berlin. He’s also a prolific TDS author, writing about the latest trends in deep learning and technological advances in the field. He loves to devour ice cream — at any time of the year — and enjoys going for long walks with his beloved (and, according to him, “much more photogenic”) four-legged companion.

You’re currently in the midst of completing a PhD on deep reinforcement learning—can you tell us about the path that led you there?

During my Economics undergraduate, I became fascinated by my statistics and game theory classes. They felt empowering, and I spent many hours in the library reading up on John Nash and econometric techniques. At that point, I was convinced I could come closer to answering the one question that fascinated me the most: How can we make sense of human decision-making over time?

I loved the mystical tummy feeling of concepts like Monte Carlo simulation and Stackelberg equilibria. But I also knew that I had to go deeper and learn more about technical details. I decided to do a Data Science MSc in Barcelona and afterwards another Computer Science MSc at Imperial.

During that time I got to sit in on a couple of computational neuroscience courses and worked on hierarchical reinforcement learning. Everything was interesting: cognition, motor control, variational inference, and non-convex optimization. I had a hard time reading up on everything I was interested in. At that point, I knew that I had to do a PhD at the intersection of machine learning, neuroscience, and collective decision-making.

Was it difficult to start out in one discipline and then chart a path towards another?

In the beginning, it was mentally challenging to make the transition from Economics. As long as I was in the Econ university bubble, it felt as if the next natural step was to do a PhD in Economics. Deciding to go to Barcelona was quite the leap for me: living abroad at 19, not speaking Spanish or Catalan, not knowing anyone. At that time there was no comparable program in Germany, and I wanted to push myself. I never really looked back, and I believe that most good things come with a touch of discomfort and struggle.

And then there were all the technical hurdles to overcome. Most undergraduate economics maths deals with scalar values: GDP, inflation, unemployment rates. You learn a lot of real analysis and how to derive first-order conditions for household and investment problems. Not that much linear algebra, though. I remember spending a couple of weekends in the Barcelona library teaching myself standard engineering maths (Fourier analysis, Taylor expansion, etc.). The same holds for my programming skills and basic stuff like connecting to a remote machine via SSH.

I kept struggling, but day by day things became more natural and easier as I built up intuition and knowledge. But I also have to say that I had a set of great teachers, mentors, and fellow students who helped me a lot. Since data science is so broad, it attracts a diverse group of people who all know something really well.

What pushed you to start writing about data science for a wider audience?

I love learning, visualizing conceptual frameworks, and restructuring my thoughts. Writing blog posts allows me to bring all of these things together. At the beginning, publishing took some courage. You never know — maybe people won’t like it. But with time I started gamifying most aspects of the writing process: Many of my blog posts start out as weekend side projects. I usually have a rough idea and want to know more about a topic. I do some research, take notes, and code up some small prototype. Afterwards, putting everything into a visually pleasing post does not take too much time and the anticipated satisfaction is really motivating.

I also love interacting with readers who deeply care about the content. I’ve gotten in touch with many awesome people I probably would not have met otherwise. There are so many unknown externalities of blogging. Next thing you know, you might have scored your dream job just by sharing your passion.

What do you enjoy the most about the blogging process?

As silly as it may sound: I absolutely love putting together thumbnails for my blog posts and drawing visual illustrations on my iPad. It is really relaxing and something I look forward to when writing. Here is a small collection of some of my favourite ones:

Images courtesy of Robert Lange

I also enjoy documenting my growth. Going through some of my first posts always gives me goosebumps. It is basically a small mental album of my conceptions and thoughts from back then. Some of them have changed, others developed into projects, or connected me with new friends.

Another important part of blogging is that it helps me improve my writing skills. As a PhD student, you don’t get to write enough. Oftentimes you only start formulating your thoughts in words when you are about to wrap up a project. That is not a lot of experience, especially if at some point you have to write your own grant applications. Blogging helps me learn to structure my thoughts, as well as the work that leads up to each post.

Data science and machine learning are dynamic fields; are there any developments you’d be especially excited to see in the short term?

When it comes to my field of research (multi-agent reinforcement learning), I am very excited about the recent revolution in the tracking of large-scale animal behaviour. Learning coordination from scratch is really hard, since the joint action space grows exponentially with the number of agents. In order to make real progress beyond centralized control, as for example in DeepMind’s AlphaStar efforts, we need to find the right inductive biases.
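The exponential blow-up Robert mentions is easy to see with a quick back-of-the-envelope calculation. The sketch below (illustrative only — the function name and numbers are our own, not from the interview) counts the joint actions a fully centralized controller would have to consider when each of n agents picks from k discrete actions:

```python
def joint_action_space_size(num_agents: int, actions_per_agent: int) -> int:
    """Number of distinct joint actions for a centralized controller.

    The joint action space is the Cartesian product of the per-agent
    action sets, so its size is k**n -- exponential in the agent count.
    """
    return actions_per_agent ** num_agents

# Even with just 4 actions per agent, the joint space explodes quickly:
for n in (2, 5, 10, 20):
    print(f"{n:>2} agents -> {joint_action_space_size(n, 4):,} joint actions")
```

Ten agents with four actions each already yield over a million joint actions, which is why approaches that exploit structure (inductive biases) matter so much beyond small-scale centralized control.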

Animal behaviour can provide us with a lot of insight into what may facilitate learning in large groups. That includes exploring the behaviour of fish schools and information-sharing in social learning tasks. Putting high-resolution collective tracking data into algorithms is a really challenging and exciting way forward.

I would also love to see more people share their experience with and insights into specialized topics. Personally, I have learned the most from experiments that did not work out—good hyperparameter ranges, tricks for training non-standard models and for scaling things up. There is real value in sharing your tricks of the trade. And everyone has something they know that is worth sharing with the world.

Curious to learn more about Robert’s work and research interests? A good place to start would be his Medium profile. Dig deeper by visiting his GitHub page, or find him on Twitter. Below are some of Robert’s recent highlights on TDS.

  • Four Deep Learning Papers to Read in April 2021 (TDS, April 2021)
    Rob’s series of curated scientific papers is a great service to the community, allowing data scientists to stay abreast of the most cutting-edge research. (While you’re at it, check out some of his earlier editions.)
  • The Lottery Ticket Hypothesis: A Survey (TDS, June 2020)
    This (very) deep dive into one of the most popular recent concepts in deep learning includes both a broad overview and a detailed literature review of key articles on the topic.
  • Evolving Neural Networks in JAX (TDS, February 2021)
    Rob’s posts don’t stop at high-level theory. This hands-on tutorial is a case in point, showing readers how the JAX library “can power the next generation of scalable neuroevolution algorithms.”
  • A Machine Learning Workflow for the iPad Pro (TDS, May 2020)
    Advanced data science work and even machine learning research can take place anywhere these days, including on your couch. This post walks us through Rob’s flexible iPad setup and discusses some of his favorite apps.

Stay tuned for our next featured author, coming soon! (If you have suggestions for people you’d like to see in this space, drop us a note in the comments!)


Building a vibrant data science and machine learning community. Share your insights and projects with our global audience: bit.ly/write-for-tds