I Went to Sleep a Data Scientist and Woke Up an AI Expert
Today everyone seems to be an AI expert. Every social media feed features some "AI influencer" determined to teach or confuse you about AI.
The difference for me is that AI came to me; I never had to move.
Throughout the 1980s and 1990s, the big breakthroughs in data and forecasting were neural networks and, later, random forests and deep learning. When I went to graduate school in the early 2000s, these were the buzzwords we had to use to get funding and show we were doing new applied data analysis.
By 2010, data science was cool. Working in big data was cutting edge, so much so that in 2012 the White House launched the Big Data Research and Development Initiative. We were experts at Natural Language Processing (NLP), teaching computers that "bank" could refer to either a financial institution or the side of a river. We mastered Machine Learning (ML), training computers to learn decision pathways without hardcoding the process. Neural networks were cutting-edge research. We were even building what we called large language models (LLMs), though they were tiny by today's standards.
This was all data science. We weren't doing artificial intelligence.
But in 2017, a quiet tremor ran through the computing world: a Google research paper called "Attention Is All You Need." This fifteen-page paper laid the groundwork for everything we're doing today. It introduced the Transformer architecture and the self-attention mechanism that underpin nearly all of today's leading AI models. Without this paper, we don't have today's AI revolution.
But at that point it was still a quiet revolution. The foundations had shifted, but the consequences weren't fully understood. In June 2018, OpenAI released its first Generative Pre-trained Transformer (GPT-1), one of the earliest large language models built on the Transformer architecture. But GPT-1 was known only in small academic and technical circles.
In 2017, when one of the most important academic papers of the past fifty years came out, I had no idea. I was restoring a historic building on a mountain in West Virginia, living in a town of 500 people, completely unaware of the coming storm.
By 2020, I was consulting on data analysis and back-end data processes. And while we were all learning to mute and unmute on Zoom, GPT-3 quietly launched and started writing better emails than most humans, though it was accessible only through an API.
I had heard about OpenAI through friends but wasn't testing the models yet. Summer 2022 changed everything for me when OpenAI released its "text-davinci" models.
Then in November 2022 came the first major public seismic moment: the launch of ChatGPT. That fall, I was testing the model regularly, and I was shocked: a text analysis and taxonomy creation project that would have taken teams of programmers, linguists, data scientists, and graduate students months was happening on my laptop in minutes. That's when I realized everything had changed.
Now anyone could access these models through a simple chat interface using natural language.
At the time of this writing (fall 2025), we've had publicly available AI chat interfaces for less than three years!
Now "AI" describes everything we've worked on for years. Some is new, but most is built on what we've used for decades. Underneath all this "AI magic" are the same mathematical principles we've been using all along.
Neural networks are just matrix multiplication and calculus. The Transformer architecture is a neural network that excels at handling sequences. And attention mechanisms are, at their core, weighted averages: basic statistics.
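To make the "weighted average" point concrete, here is a minimal sketch of single-head scaled dot-product attention in plain NumPy. The function names and toy matrices are mine, for illustration only; real models add learned projections, multiple heads, and masking on top of this, but the core is exactly a softmax-weighted average.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: each output row is a weighted average of the rows of V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # each row sums to 1: the averaging weights
    return weights @ V, weights         # weighted average of the value vectors

# Toy example: 3 tokens with 4-dimensional embeddings.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(w.sum(axis=-1))  # ~[1. 1. 1.], confirming each output is a genuine weighted average
```

Everything in those dozen lines is matrix multiplication plus a normalization step, which is the whole point.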
We didn't become obsolete; we became the foundation. Every AI breakthrough stands on decades of machine learning research, statistical modeling, and good old-fashioned regression analysis.
I went to sleep in a large field of Big Data and woke up in a lush AI forest. The best part about this? I never actually had to move. The forest just grew up around me.