It hasn’t been that long since artificial intelligence began its journey out of the realm of sci-fi novels and into our daily lives. Perhaps because of its recency, AI’s transition into real-world systems and technologies has been both inspiring and unsettling, a tension that is just as strong in debates around its future. What should AI become? Who should it serve?
In this week’s Variable, we share two eye-opening contributions to this conversation. If you prefer to keep things more actionable, however, have no fear: we also include some of our recent favorites on topics like MLOps and model stacking. Let’s get to it!
- Learn about the risks of corporate-led AI research. The major progress we’ve seen in recent years in areas like reinforcement learning comes at a steep cost, and tech giants like Google and Facebook have the deep pockets to cover it. Is that the right way to go about it? Travis Greene asks a key question, which he goes on to answer with nuance: "Should we trust that market-driven AI research and development in the ‘private interest’ will align with human-centric values of transparency, justice, fairness, responsibility, accountability, trust, dignity, sustainability, and solidarity?"
- Explore a potential alternative to the dominance of language models. The most visible examples of AI’s recent strides are likely massive language models like GPT-3 and BERT. In a recent episode of the TDS Podcast, Diffbot CEO Mike Tung chatted with Jeremie Harris about another promising path for developing AI’s future capabilities: knowledge graphs.
- Experiment with stacking to improve your model’s performance. If you only have time to tinker with one hands-on tutorial this week, it might as well be Jen Wadkins’s step-by-step intro to model stacking, an approach that boosts outcomes by taking predictions from several different models and then using them "as features for a higher-level meta model."
- Get comfortable—or at least better acquainted—with MLOps. Machine learning operations has been a buzzy subfield for a while now, and Yashaswi Nayak’s extremely accessible guide is a wonderful resource for anyone who’d like to learn more about it. Yashaswi begins with basic definitions and then walks us through an entire MLOps lifecycle, from infrastructure to deployment.
- Prepare for a graduate degree with practical, hard-earned advice. If you’ve been contemplating going back to school for a master’s in data analytics or data science, you’ll want to catch up with Isabella Velásquez’s account of her own experience a few years back. It includes many insights that can set you on the right path and help you make the most of your program.
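If you want a feel for the stacking idea before diving into the full tutorial, here is a minimal, self-contained sketch in plain Python. The single-feature "base models" and the perceptron "meta model" below are hypothetical stand-ins for real learners (in practice you would use, say, scikit-learn estimators); what carries over is the structure: base-model predictions become the features for a higher-level meta model.

```python
# Toy sketch of model stacking using only the Python standard library.
# Two weak base models each score using a single feature; a simple
# perceptron meta model is then trained on their stacked scores.

# A small grid dataset with two features in [0, 1];
# the true label is 1 when the two features sum to more than 1.
data = [(i / 10, j / 10) for i in range(11) for j in range(11)]
labels = [1 if i + j > 10 else 0 for i in range(11) for j in range(11)]

# Level-0 base models: each looks at only one feature,
# so neither is very accurate on its own.
def base_score_a(x1, x2):
    return x1  # this model's confidence that the label is 1

def base_score_b(x1, x2):
    return x2

def to_class(score):
    return 1 if score > 0.5 else 0

# Level-1 features: the base models' scores, stacked together.
meta_features = [(base_score_a(*p), base_score_b(*p)) for p in data]

# Meta model: a perceptron trained on the stacked scores.
w, b = [0.0, 0.0], 0.0
for _ in range(50):
    for (f1, f2), y in zip(meta_features, labels):
        pred = 1 if w[0] * f1 + w[1] * f2 + b > 0 else 0
        step = 0.1 * (y - pred)  # update only on mistakes
        w[0] += step * f1
        w[1] += step * f2
        b += step

def stacked_predict(x1, x2):
    f1, f2 = base_score_a(x1, x2), base_score_b(x1, x2)
    return 1 if w[0] * f1 + w[1] * f2 + b > 0 else 0

def accuracy(predict):
    return sum(predict(*p) == y for p, y in zip(data, labels)) / len(data)

acc_a = accuracy(lambda x1, x2: to_class(base_score_a(x1, x2)))
acc_b = accuracy(lambda x1, x2: to_class(base_score_b(x1, x2)))
acc_stacked = accuracy(stacked_predict)
print(f"base A: {acc_a:.2f}, base B: {acc_b:.2f}, stacked: {acc_stacked:.2f}")
```

One caveat the toy glosses over: in a real pipeline the base-model predictions fed to the meta model should come from out-of-fold or held-out predictions, not from models scored on their own training data, or the meta model will overfit to leaked labels. The full tutorial covers that side of the recipe.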
Thank you for joining us for another week of exciting and thought-provoking articles! If any of these posts inspires you to write your own take on the future of AI, machine learning, or another topic entirely, consider sharing it with our team.
Until the next Variable, TDS Editors
Recent additions to our curated topics:
Getting Started
- Great Expectations: Always Know What to Expect from Your Data by Khuyen Tran
- 5 Tips to Get Your First Data Scientist Job by Renato Boemer
- LLE: Locally Linear Embedding – A Nifty Way to Reduce Dimensionality in Python by Saul Dobilas
Hands-On Tutorials
- Render 3D Buildings in Geospatial WebGL Visualizations by Charmaine Chui
- Knowing the Present and Future of Your Crop by Fetze Pijlman and Tomas Izquierdo Garciafaria
- 3 Steps for a Successful Data Migration by Mark Grover
- Understanding Python Imports, __init__.py and pythonpath – Once and for All by Dr. Varshita Sher
Deep Dives
- Not Merely Averages: Using Machine Learning to Estimate Heterogeneous Treatment Effects (CATE, BLP, GATES, CLAN) by Lucas Kitzmüller
- Creating Generative Art NFTs from Genomic Data by Simon Johnson
- A Machine Learning Algorithm for Predicting Outcomes of MLB Games by Garret Nourse
- Differentiable Hardware by CP Lu, PhD
Thoughts and Theory
- A Step-By-Step Guide to Approaching Complex Research Projects by Tal Rosenwein
- Bilinear Pooling for Fine-Grained Visual Recognition and Multi-Modal Deep Learning by Konstantin Kutzkov
- Illustrated Difference between MLP and Transformers for Tensor Reshaping in Deep Learning by Patrick Langechuan Liu
- Confirmatory Factor Analysis Fundamentals by Rafael Valdece Sousa Bastos