A series of interviews highlighting the incredible work of writers in the space of Data Science and their writing journeys.

"So long as you write what you wish to write, that is all that matters; and whether it matters for ages or only for hours, nobody can say." ― Virginia Woolf, A Room of One’s Own
In an endeavor to bring notable work in the field of Machine Learning to the forefront, I started an interview series last year. During the first season, I presented stories from established data scientists and Kaggle Grandmasters who shared their journeys, inspirations, and accomplishments. For the second season, I’m interviewing book authors. As a writer myself, I have tremendous respect for people who write books. A single well-written article takes a lot of time, energy, and patience, and replicating that for an entire book is no mean feat. This edition of the interviews will bring to light the stories of some well-known authors in the data science field.
Meet the Author: Radek Osmulski
The fastai community is well known for giving the world not only the means to get into machine learning but also great researchers from time to time. Radek Osmulski is one such fastai-taught AI Research Engineer. He has worked for several startups in Silicon Valley, Australia, and Dubai. In 2018 he won a Kaggle competition sponsored by Google. Perhaps surprisingly, Radek doesn’t have a formal background in math or computer science. There were many things about learning machine learning and becoming employable that he had to figure out for himself. He documented all his learnings in his book Meta Learning: How To Learn Deep Learning And Thrive In The Digital World. In this interview, we’ll learn more about the idea behind the book, his motivation, and his advice for data science enthusiasts and writers. Let’s dive in.
Q: How did the idea of this book originate?
Radek: There are so many things that are surprising about learning machine learning.
Initially, you might feel like you are putting a lot of effort into learning ML but making very little progress. You might go through a set of lectures or complete a MOOC. But when confronted with a real-life ML problem, you might have a hard time figuring out which way is up.
You look back at the last six months, and while you may see that you have learned a lot in the academic sense, your ability to impact the world around you might have changed only slightly.
This was precisely my experience. I spent several years in this limbo state of learning, not having any impact on the world around me. There were no employers banging on my door. I tried my hand at Kaggle but found it all very confusing.
Along the way, I wanted to throw in the towel and quit a couple of times. At one point I didn’t touch ML for five months straight.
But then I decided to give it one last shot. This time I would not do it my way. I would go out of my way to study what worked for others, how others learned and interacted with the community, and how they arrived at amazing outcomes.
And this made all the difference. The results were spectacular.
Very soon I won a Kaggle competition. I have since held several very decent deep learning roles working with some of the most interesting people I have ever met.
I never expected my life would go this way. I wrote Meta Learning to share the ideas and techniques that led to this surprising outcome.
On a more personal level, my motivation for writing the book was to stay in touch with the fastai community. There was a time when I would post to the fast.ai forums a lot, and I certainly miss that. I was also looking for a sense of closure to an adventure that I devoted eight years of my life to.
Q: Could you summarize the main points covered in the book for the readers?
Radek: Sure! I strove to make the book as concise as I could. Over its 90 or so pages, I discuss things like the interplay of theory and practice in learning, what being a good developer is all about, the importance of the tools we use, what lies at the core of machine learning, etc.

But the one overarching idea is that you can achieve whatever you set your mind to. I mean this in a very concrete, literal sense. You can master machine learning regardless of your background.
But you cannot do it by just throwing willpower and effort at it. This will not work.
You do not have to set bombastic goals, be extremely confident, or be very talented. Many of these things are in the eye of the beholder anyhow.
But what matters is what methods you employ and that you continue to learn.
Q: Who do you think is the target audience for the book?
Radek: Anyone who considers themselves a student of machine learning.
This is a very broad category. On the one hand, Meta Learning is intended to take an absolute beginner and show them exactly what they need to focus on to make quick progress.
This can be especially helpful to someone who has been learning machine learning for a while or is just starting out.

But you might also be a seasoned pro and still benefit greatly from reading the book.
We all have our blind spots, areas where we are weaker. You do not become better by employing fancy techniques but by working on the fundamentals.
Plus, regardless of your background, opening yourself up to how things are done on the Internet can be a very valuable proposition.
Q: How can a reader make the most out of this book?
Radek: Read it. Take notes on the ideas you encounter. Discuss them with your friends. Think about them in the shower. Tweet about them. Make them your own.
Give the techniques an honest go. What is working particularly well for you? How do you feel putting this or that technique to practice?
We learn best through activities that allow us to reflect on the experience and to put what we are learning in our own words.
Q: What advice would you give a new writer, someone just starting out?
Radek: Writing is just like machine learning.
There is a natural flow to working on a machine learning project. You first come up with a way of validating your results. You then build a simple model. As you iterate on all components of the pipeline, you improve your model’s performance while ensuring your model can generalize well on unseen data.
The model architecture is not all that important. Likewise, it matters very little what programming language you use to implement your solution. Or how much math you know. The approach itself dictates to a great extent whether you will get a decent result or not.

Similarly, there is a natural flow to writing that you might not be aware of when looking from the outside in. I found it very helpful to learn how writers think by reading their books on writing (On Writing by Stephen King, The War of Art by Steven Pressfield) and listening to Tim Ferriss’s podcast interviews with writers.
If I were to summarize the main ideas, here they are.
The time you give to writing is what matters. All first drafts are universally horrible; good writing is bad writing that you continue to revise. What you have to say matters more than how you say it. Keep a small notebook on you at all times to write down ideas as they strike. And last but not least, no piece of communication is ever perfect, but to paraphrase Steve Jobs, what makes a good writer? They ship!
Q: How long did it take for you to finish writing the book? More importantly, how did you manage to write a book along with your job?
Radek: It took me eight years to write the book. Every day I would research a problem slowing me down in my quest to learn machine learning, or I would experiment with a technique I hadn’t used before.
I would talk about it all online via posts and tweets.
It felt like I had this book inside me, and it jumped out onto the page. I wrote the first 60% of the book in three weeks, much of it over the winter holidays.
The rest of the book didn’t take too long to write either, but it was more stretched out over time.
Q: Do you have a favorite book and author (in technical or non-technical space)?
Radek: There are so many books out there that I love!
But a book I would like for everyone to read is The Origin of Wealth by Eric D. Beinhocker. It discusses how you can use algorithms to come to very meaningful conclusions about the surrounding world. More importantly, it deconstructs economics as a science and shows where it went wrong.
As Yuval Noah Harari puts it, we humans live in a world built from abstract ideas. No one has ever seen a corporation or intellectual property walk down the street, but these and other such concepts shape the world we live in. There is nothing inherently fixed about these ideas, and over time they have significantly evolved. By extension, if we could replace some of them with better ones, the world around us would start to look different.
And nothing has contributed to the current state of the world, with its deeply entrenched inequalities and the destruction of the environment, as much as the ideas we hold about human nature (behavioral economics) and economics in general.
I am optimistic in the sense that history shows we are capable of replacing the ideas we use to organize ourselves, but unfortunately, we are quite inefficient in doing so. The process is lengthy and full of turns that do not lead anywhere, not unlike evolution.
👉 Are you looking forward to connecting with Radek? Follow him on Twitter.
👉 Read other interviews in this series:
Don’t just take notes – turn them into articles and share them with others