
GPT and Beyond: The Technical Foundations of LLMs

Our weekly selection of must-read Editors' Picks and original features

Photo by K8 on Unsplash

In just a few short months, large language models moved from the realm of specialized researchers into the everyday workflows of data and ML teams all over the world. Here at TDS, we’ve seen how, along with this transition, much of the focus has shifted toward practical applications and hands-on solutions.

Jumping straight into tinkering mode can make a lot of sense for data professionals working in industry—time is precious, after all. Still, it’s always a good idea to establish a solid grasp of the inner workings of the technology we use and work on, and that’s precisely what our weekly highlights address.

Our recommended reads look both at the theoretical foundations of LLMs—specifically, the GPT family—and at the high-level questions their arrival raises. Even if you’re just a casual user of these models, we think you’ll enjoy these thoughtful explorations.


We’ve published so many fantastic articles on other topics in recent weeks; here are just a few we absolutely had to highlight.


Thank you for supporting our authors! If you enjoy the articles you read on TDS, consider becoming a Medium member – it unlocks our entire archive (and every other post on Medium, too).

We hope many of you are also planning to attend Medium Day on August 12 to celebrate the community and the stories that make it special – registration (which is free) is now open.

Until the next Variable,

TDS Editors

