I Went to Sleep a Data Scientist and Woke Up an AI Expert

Created by GPT-5

And now my beauties, something with poison in it I think, with poison in it, but attractive to the eye and soothing to the smell . . . poppies, poppies, poppies will put them to sleep. Sleep, sleep, now they’ll sleep.
— The Wicked Witch of the West, The Wizard of Oz (1939)

For old time’s sake, you can go watch the scene…I’ll wait.

Poppies Scene Here

Today, everyone seems to be an AI expert. Every social media feed features some "AI influencer" determined to teach or confuse you about AI. Whatever you think you know about AI, it’s never enough.

The difference for me is that AI came to me; I never had to move.

There’s an old joke about a couple riding in a big pickup (obviously an American joke).

The wife says, “Ya know, honey, when we were young, we used to sit all close-up and lovey-dovey, and now you sit way over there and I sit way over here.”

The old man is quiet for a bit, then says, “Yep, but I never moved.”

So, yep, I never moved.

AI slid across the big ol’ American pickup bench seat right into my lap!

Okay, enough bad analogies, from falling asleep in poppy fields to pickup trucks (just trying to prove this wasn’t AI-written; it would never be this lame…).

Throughout the 1980s and 1990s, the major breakthroughs in data and forecasting were neural networks, random forests, and the early forms of what we now call deep learning. When I went to graduate school in the early 2000s, these were the buzzwords we had to use to get funding and show we were doing new applied data analysis.

By 2010, data science was cool. Working in big data was cutting-edge, so much so that in 2012 the White House launched the Big Data Research and Development Initiative. (Yep, imagine that, a White House investing in science!)

We were experts at Natural Language Processing (NLP), teaching computers that "bank" could refer to either a financial institution or the side of a river. We mastered Machine Learning (ML), training computers to differentiate pathways without hardcoding processes. Neural Networks were cutting-edge research. We were building what we called Large Language Models (LLMs), yet they were tiny by today’s standards.

This was all data science. We weren't doing artificial intelligence.

But in 2017, a quiet tremor ran through the computing world: a Google research paper called "Attention Is All You Need." That short paper revolutionized everything we're doing today. It introduced the Transformer architecture and the self-attention mechanism that underpin today's large language models.

Without this paper, we wouldn't have today's AI Revolution.

But at that point, it was still a quiet revolution. The foundations had shifted, but the consequences weren't yet fully understood.

In June 2018, OpenAI released the first Generative Pre-trained Transformer (GPT-1), one of the earliest large language models built on the Transformer architecture. But GPT-1 was known only in small academic and technical circles.

In 2017, when one of the most important academic papers of the past fifty years came out…I had no idea. I was restoring a historic building on a mountain in West Virginia, living in a town of 500 people, completely unaware of the coming storm.

How we found it… (restoration photos at www.172sprucest.com)

By 2020, I was consulting on data analysis and back-end data processes. And while we were all learning to mute and unmute on Zoom, GPT-3 quietly launched and started writing better emails than most humans, though it was accessible only through an API.

I heard about OpenAI through friends, but I wasn't testing the models yet. Then the summer of 2022 changed everything, when OpenAI released "text-davinci". I soon started testing the model regularly.

In November 2022, the first wave of major public awareness arrived with the launch of ChatGPT, the first easily accessible public chat version of these models.

I was completely shocked. A text analysis and taxonomy creation project that would have taken teams of programmers, linguists, data scientists, and graduate students months was happening on my laptop in minutes. That's when I personally realized everything had changed.

Anyone could access these models in a simple and basic interface using natural language. The Data Science gatekeepers were no longer needed.

At the time of this writing (fall 2025), we've had publicly available AI chat interfaces for less than three years! I have socks significantly older than today’s AI programs!

Now "AI" describes everything we've worked on for years. Some is new, but most is built on what we've used for decades. Underneath all this "AI magic" are the same mathematical principles we've been using all along.

See the chart I created below that steps through major computational breakthroughs and AI developments.

Neural networks are just matrix multiplication and calculus. The Transformer architecture is a neural network that excels at handling sequences. And attention mechanisms? They're weighted averages, just basic statistics.
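If you want to see how un-magical that really is, here’s a minimal sketch in plain Python/NumPy (my own toy example, not code from any production model) of the scaled dot-product attention described in "Attention Is All You Need": compute similarity scores with dot products, turn them into weights with a softmax, and take a weighted average of the value vectors.

import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability, then normalize to probabilities.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: arrays of shape (seq_len, d) holding query, key, and value vectors.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # how similar each query is to each key
    weights = softmax(scores, axis=-1)   # each row sums to 1: the "attention" weights
    return weights @ V                   # a weighted average of the value vectors

# Tiny self-attention example: 3 tokens, 4-dimensional embeddings, Q = K = V.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (3, 4): one weighted-average vector per token

That’s the whole mechanism: softmax(QK^T / sqrt(d)) times V, a weighted average where the weights come from how similar each query is to each key. Wrap it in learned projection matrices, stack a bunch of layers (plus the usual feed-forward and normalization plumbing), and you have a Transformer.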

We didn't become obsolete; we became the foundation. Every AI breakthrough stands on decades of machine learning research, statistical modeling, and good old-fashioned regression analysis.

I went to sleep in a large field of Big Data Science and woke up in a lush AI forest. The best part about this? I never actually had to move.

The forest just grew up around me.

The chart traces the separate pathways of traditional data science and AI; only in the past ten years or so have the two fields converged.

First version created by GPT-3, then rewritten in SVG by me since it couldn’t get the lines even!

 
