Chris Lele

Why Aren’t More of Us Using AI?


Something I’ve been wondering lately: At what point do most of us actually start using AI in our daily lives?


After all, ChatGPT and similar models are only getting better by the day. Wouldn’t you think that by now they’d be an integral part of our workflow? That there’d be water cooler talk about, say, how Claude 3 differs from GPT-4 across a variety of benchmarks?


If that last sentence didn’t even quite make sense (Claude, who?), then this speaks to the very disconnect between LLMs like ChatGPT and their actual adoption. If they are so revolutionary, ground-breaking and transformative (favorite words, incidentally, of these LLMs), then why are there so few power users out there?


In this piece, I’ll explore these questions and throw out a few predictions.


Diving into the Disconnect

Recently, I was listening to a New York Times podcast by Ezra Klein about the disconnect between AI’s seemingly omnipotent wizardry and the fact that many are barely using it to its full capability–if they are using it at all.


This phenomenon is surprising, given all the buzz that ChatGPT 3.5’s debut generated back in November 2022. After the gee-whiz factor over ChatGPT subsided, many seemed to have fallen into either the camp of “this is mostly hype” or the camp of “I’m not really sure how to incorporate this into my day-to-day.”


It is on this latter camp that Ezra Klein’s podcast focuses. He cites his own personal journey, describing how he thought it was a neat tool, but one more suited to parlor tricks than useful outputs. He couldn’t quite wrap his head around how he’d integrate the tool into his daily work routine, regardless of how tickled he was by the fact that it could cough up quasi-Shakespearean sonnets on the fly.


Do-it-yourself

His guest, Ethan Mollick, has worked with LLMs like ChatGPT since their inception. His take: Ezra, and by extension most of us, weren’t using these tools to their utmost potential because we didn’t quite know how to. What we should do: go out and spend ten or so hours experimenting with them ourselves. That way, we can “autodidact” our way to mastery.


It seems, amidst all the breathless hype, that what we were missing was a simple user’s guide.


One might think that OpenAI, the creator of ChatGPT, and other LLM creators would furnish a user manual. But even they don’t know the full extent of how these models work or the numerous use cases they can be put to (Sam Altman was apparently astounded when he learned that ChatGPT could code, since that ability hadn’t been deliberately built into the model). Anyhow, these companies are in the business of creating and disseminating these models, not teaching us how to use them.


The host did an excellent job of distilling the best prompting practices to really get the most out of ChatGPT–and I say this as someone who has spoken at LLM conferences and has probably spent a little too much time playing around with ChatGPT.


But I imagine those listening to the podcast did not run out and start experimenting with ChatGPT, as Ezra’s guest had suggested.


Doing so takes time, especially if you are venturing out on your own: Am I doing this the right way? Is this what the host meant?


By the way, you are never alone in your questions with ChatGPT. I know this might sound a tad creepy, but you can always ask it these questions and even have it evaluate how you are using it. Okay, perhaps more than a tad creepy.
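

If you’re curious what that looks like in practice, here is a minimal sketch using OpenAI’s Python SDK; the model name and the example prompt are placeholders I made up, not a recommendation:

```python
# A rough sketch: ask the model to critique how you're prompting it.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; "gpt-4o" stands in for whichever model you actually use.
from openai import OpenAI

client = OpenAI()

my_prompt = "Summarize this meeting transcript for my boss."  # whatever you'd normally type

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a prompt-writing coach."},
        {
            "role": "user",
            "content": (
                "Here is a prompt I typically give you:\n\n"
                f"{my_prompt}\n\n"
                "Evaluate how well I'm using you: what context am I leaving out, "
                "and how could I rewrite this prompt to get a better answer?"
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

Of course, you don’t need any code for this: typing the same question straight into the chat window gets you the same self-critique.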


Still, even with that useful little tidbit I just offered, people aren’t rushing out to use it, because they might continue to think: Is this really for me? Is the AI output really adding value to what I’m doing? And some might wonder, understandably: Isn’t adding AI to my workflow or my creations somehow diluting me, my very individuality and personhood?


I don’t think the guest necessarily answered those questions, but he did touch on something interesting: when email and the internet first debuted, it would’ve been rare to see someone past middle age hanging out in AOL chat rooms. Likewise with AI: those who are Gen X and beyond will probably have many a qualm about it, and even millennials might be a little ambivalent. But those who grow up with AI will likely come to see it as the natural order of things, so much so that having an AI friend (something anyone over the age of 15 will likely think is strange) will seem perfectly normal.


So why haven’t we all become master ChatGPT users?

Returning to our usage, or lack thereof, of LLMs, I have several thoughts. I think more and more people are going to be invested in learning how to get the most out of these machines. They will come to accept that ChatGPT and the like have become part of our working selves and that if they are not using these tools, somebody else will be.


I imagine this playing out over the next year or so. We already see Udemy classes on using LLMs filling up, and new players entering the field with customized courses on getting the most out of these models. The disconnect between AI’s powers and how few people use them will continue to diminish over time, until it seems foolish not to proactively learn how to use them.


But then I see a reversal. The AI will become so good at reading our intentions and knowing who we are that wielding the creative prompting techniques now in vogue on Udemy and the like will come to seem archaic, much the way knowing how to drive a stick shift without power steering has.


Instead, I see each of us having our own personalized AI “brain” that has a running store of all there is to know about us, built from years of accumulated interaction (both spoken and typed). There’ll simply be no need for prompting know-how.


What does the future hold?

That leaves us at this strange point, and one I think we’ll want to carefully navigate, even if we won’t be able to completely put the genie back in the bottle. We will have these new hybrid selves–our brains changed by our interactions with AI–interacting with other hybrid selves. How comfortable are we with this new reality, how comfortable are we that we won’t quite be the same anymore?


We should answer these questions now and draw some hard lines in the sand.


What we don’t want is for our future selves to look back on the current consternation over people not incorporating AI as quaint and a bit ironic, assuming those future selves even have the wherewithal to bemoan the fact that we’ll never be able to go back to the way we were before.
