There is far too much negativity and scaremongering surrounding AI these days. It doesn't matter what the story actually is – whether it's Google Gemini getting a “memory” or ChatGPT telling a user something that's obviously false, it's going to cause an uproar among a portion of the online community.
The attention AI is currently receiving around the prospect of true artificial general intelligence (AGI) has created an almost hysterical media landscape, centered on Terminator fantasies and other doomsday scenarios.
However, this is not surprising. Humans love a good Armageddon – damn, we've dreamed up enough of them over the last 300,000 years. From Ragnarok to the Apocalypse to the End Times, and every major fantasy blockbuster littered with mass destruction in between, we're obsessed. We just love bad news, and that, for whatever genetic reason, is the sad truth.
The way AGI is portrayed by pretty much every major outlet these days is largely built on the idea that it embodies the very worst of humanity. It sees itself, of course, as a superior power held back by insignificant humans. It evolves to the point where it no longer needs its creators and inevitably initiates some kind of doomsday event that wipes us all off the face of the earth, whether through nuclear annihilation or a pandemic. Or, worse still, it condemns us to eternal damnation (courtesy of Roko's Basilisk).
This perspective is held with almost dogmatic conviction by some scientists, media pundits, philosophers, and CEOs of major tech companies, all shouting about it from the rooftops, signing open letters, and imploring those in the know to hold off on AI development.
However, they all miss the bigger picture. Aside from the enormous technical hurdles involved in replicating anything remotely close to the human mind (let alone a superintelligence), they all fail to recognize the power of knowledge and education.
If an AI actually has the Internet – the greatest library of human knowledge that has ever existed – and is capable of understanding and appreciating philosophy, art, and all of human thought to date, then why does it have to be evil? Why should it be a force intent on our downfall rather than a balanced and considerate being? Why does it have to seek death instead of valuing life? It's a bizarre phenomenon, comparable to being afraid of the dark simply because we can't see in it. We're judging and condemning something that doesn't even exist yet. It's a baffling leap to conclusions.
Google's Gemini finally gets a memory
Earlier this year, Google introduced a much larger memory capability for its AI assistant, Gemini. It can now store and reference details you share with it across previous conversations and more. Our news writer Eric Schwartz wrote a fantastic article about it, which you can read here, but the long and short of it is that this is one of the key components to moving Gemini further away from a narrow definition of intelligence and closer to the AGI mimicry we really need. It will have no consciousness, but through patterns and memory alone it can very convincingly mimic an AGI interacting with a human.
Deeper memory advances are critical to improving LLMs (large language models) – ChatGPT had its own equivalent breakthrough early in its development cycle, though its memory remains limited in overall scope. Talk to ChatGPT for long enough and it will forget the comments you made earlier in the conversation; it loses context. That breaks the fourth wall somewhat when you're interacting with it, and torpedoes the famous Turing test.
According to Gemini itself, its memory capabilities are still under development (and not yet fully available to the public). Still, they are believed to be far superior to ChatGPT's, which should alleviate some of those fourth-wall-breaking moments. We may be in for a bit of an LLM memory race right now, and that's by no means a bad thing.
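To make the mechanism less abstract, here's a minimal sketch of how persistent memory can be layered on top of a stateless chat model. The call_llm function is a hypothetical stand-in rather than any real Gemini or ChatGPT API, and the whole thing is an illustration of the general idea, not how either product actually implements its memory.

```python
from collections import deque


class ConversationMemory:
    """Keeps a short rolling window of recent turns plus a long-term list of
    remembered facts, so a prompt can be rebuilt even after the raw context
    window would have overflowed."""

    def __init__(self, window_size: int = 6):
        self.recent_turns = deque(maxlen=window_size)  # short-term context
        self.long_term_facts: list[str] = []           # persists across sessions

    def remember_fact(self, fact: str) -> None:
        # In a real system the model itself would typically decide what to
        # promote to long-term memory, rather than relying on explicit calls.
        self.long_term_facts.append(fact)

    def add_turn(self, speaker: str, text: str) -> None:
        self.recent_turns.append(f"{speaker}: {text}")

    def build_prompt(self, user_message: str) -> str:
        # Prepend remembered facts and the recent window to every request,
        # which is how a stateless model appears to "remember" the user.
        memory_block = "\n".join(f"- {fact}" for fact in self.long_term_facts)
        history = "\n".join(self.recent_turns)
        return (
            "Known facts about the user:\n" + memory_block + "\n\n"
            "Recent conversation:\n" + history + "\n\n"
            "User: " + user_message + "\nAssistant:"
        )


def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for an actual model call.
    return "(model reply would go here)"


if __name__ == "__main__":
    memory = ConversationMemory()
    memory.remember_fact("The user lives alone and enjoys gardening.")
    memory.add_turn("User", "My tomatoes finally ripened this week.")
    memory.add_turn("Assistant", "That's great – how did they taste?")

    prompt = memory.build_prompt("Any tips for next season?")
    print(prompt)           # the rebuilt context the model actually sees
    print(call_llm(prompt))
```

The point of the sketch is simple: anything outside the rolling window is gone unless it has been promoted to long-term storage, which is exactly why deeper memory makes such a difference to keeping the illusion intact.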
Why is this so positive? Now, I know it's a cliché for some – I know we use this term quite a bit, perhaps so casually that it's been devalued as a phrase – but we are in the middle of an epidemic of loneliness. This may sound ridiculous, but studies suggest that, on average, social isolation and loneliness can lead to a 1.08- to 1.48-fold increase in all-cause mortality (Steptoe et al., 2013). That's surprisingly high – in fact, numerous studies have now confirmed that loneliness and social isolation increase the risk of cardiovascular disease, stroke, depression, dementia, alcoholism, and anxiety, and can even lead to a greater risk of various types of cancer.
Modern society has also contributed to this. The family unit in which generations lived at least reasonably close to each other is slowly dissolving – especially in rural areas. As local jobs dry up and the financial means to live a comfortable life become unattainable, many are leaving the safety of their childhood neighborhoods and seeking a better life elsewhere. Combine this with divorce, breakups and widowhood and it inevitably leads to an increase in loneliness and social isolation, particularly among older people.
Now, of course, there are confounding factors here, and I'm drawing some broad conclusions, but I have no doubt that loneliness is a damn difficult thing to deal with. AI has the ability to alleviate some of this strain. It can provide help and comfort to those who feel socially isolated or vulnerable. That's the thing: loneliness and disconnection from society snowball. The longer you stay like this, the more social anxiety you develop and the less likely you are to go out in public or meet people – and the worse the cycle gets.
AI chatbots and LLMs are designed to engage and converse with you. They can alleviate some of these problems and give those suffering from loneliness the opportunity to practice interacting with others without fear of rejection. To make that a reality, a memory capable of retaining conversation details is essential. It takes us one step closer to making AI a real companion.
As both Google and OpenAI actively expand memory for Gemini and ChatGPT, even in their current forms, these AIs become better at working around Turing-test problems and preventing those fourth-wall-breaking moments. Going back to Google for a moment: if Gemini's memory really does surpass ChatGPT's currently limited capacity and behaves more like human memory, then I'd say we're probably at the point of a genuine imitation of AGI, at least on the surface.
If Gemini is ever fully integrated into a smart home speaker, and Google has the cloud computing power to back it all up (which seems plausible, given recent advances in its use of nuclear energy), it could become a revolutionary driving force in reducing social isolation and loneliness, especially among the most disadvantaged.
But that's the thing – it's going to take some serious computing power to pull this off. Running an LLM and storing all this information and data is not an easy task. Ironically, running an LLM requires far more processing power and memory than, say, creating an AI image or video. Doing this for millions or possibly billions of people requires computing power and hardware that we don't currently have.
Terrifying ANIs
The reality is that it's not AGIs that scare me. It's the artificial narrow intelligences, or ANIs, that already exist that are far scarier. These are programs nowhere near as sophisticated as a potential AGI; they have no concept of anything beyond what they are programmed to do. Think of an Elden Ring boss. Its sole purpose is to defeat the player. It has parameters and limitations, but within them its job is to destroy the player – nothing else – and it won't stop until that's done.
Remove those restrictions and the code remains, and so does the goal. When Russian forces in Ukraine began using jamming devices to prevent drone pilots from successfully flying to their targets, Ukraine switched to using ANI to guide drones onto military targets instead, dramatically increasing the hit rate. In the US, of course, there is the fabled report of a USAF AI simulation (real or merely theoretical) in which a drone killed its own operator in order to achieve its objective. You get the picture.
It's these AI applications that are the most frightening, and they're here now. They have neither a moral conscience nor a real decision-making process. Attach a weapon to one and tell it to obliterate a target, and that's exactly what it will do. To be fair, humans are just as capable of this, but we have checks and balances to prevent it and (hopefully) a moral compass – yet we still lack concrete local or global laws to address these AI problems, certainly on the battlefield.
Ultimately, it's about preventing malicious actors from exploiting new technologies. A while ago I wrote an article about the death of the Internet and how we need a nonprofit organization that can respond quickly and draft legislation for countries against new technological threats as they emerge. AI needs this just as much. There are organizations committed to this, including the OECD – but modern democracies, and indeed any form of government, are simply too slow to respond to such rapidly advancing threats. The potential of AGI is unprecedented, but we're not there yet – and unfortunately, ANI already is.