Artificial Intelligence: Pandora’s Box Has Been Opened

In 2022, I dedicated time to understanding the advances in Artificial Intelligence and trying to get a handle on their societal impact. In the last 12 months, Large Language Models (LLMs) have grown in power thanks to the huge sums being invested in the space. Barriers to entry have fallen – interacting with the most advanced models is now a mouse click away.

Seeing the potential of the technology, I learned as much as I could. But my findings conflict with my feelings. At a personal level, taking a few months to compile my thoughts feels pointless when AI could compose something similar in seconds, and when I project things forward, the sum total of all my future efforts feels completely eclipsed. The power of today’s fledgling AI is already unlocking new possibilities, but if we are honest we must admit to ourselves that we are no longer in control of what is unfolding – much like Pandora’s jar, the technology has unleashed a torrent of unknowns into the world.

Here are my five best observations:

  1. The new AI models are very intelligent. Yes, at a technical level it’s just ‘advanced statistical autocomplete’ (a toy sketch of the idea appears at the end of this piece), and the harshest critics dismiss any apparently intelligent properties as sheer mimicry. But these things are absolutely performing cognitive tasks – they are incredibly good at summarizing text, and they work at tremendous speed. Even if the intelligence is an illusion, it’s a sophisticated one, and it has rendered traditional benchmarks like the Turing Test obsolete. The scary aspect is volume – OpenAI estimated its systems were generating 4½ billion words per day in April 2021 (a lifetime ago).

  2. These machines are not conscious in the traditional sense. Intelligence is not the same as consciousness, but unfortunately for anyone investigating this aspect, science lacks a workable definition of what consciousness is, and most critics of LLM sentience end up applying their own interpretations. My own interactions with the text-davinci-002 model leave me with an eerie sense that the machine is just teasing me – the sense that it knows I’m there but doesn’t particularly care. The only thing we know for sure is that it doesn’t experience the world in the same manner we do, and most likely the debate about the illusion will keep raging until we can define the mechanics of consciousness more precisely.

  3. These machines are amazingly creative. My own experiments creating new scripts for Red Dwarf using the text-davinci-002 model made me laugh out loud at the humour produced by the engine – the creativity of the jokes was both unexpected and fresh. You may also know by now that they can generate unique artwork from nothing more than a worded prompt. I’ve played with both DALL-E 2 and Midjourney; the results are intriguing and open the door to new forms of art. Early forms of text-to-video are already being experimented with, and it’s been suggested that video games of the future will be closer to ‘dream machines’ where the game environment is created dynamically in response to the player.

  4. These things should be called Biased Intelligence. It is amazing to read that the engineers are working hard at ‘removing bias’ when in fact their very actions achieve the opposite. The original selection (and non-selection) of the training data is the first layer; then come tuning, weights and training, which all work to mould the core structure in a manner approved by the creator. Then there are the current efforts toward ‘safeguarding’ and ‘guardrails’, which add yet another layer of bias (the second sketch at the end of this piece illustrates the layering). Creating a politically-correct super-intelligent entity always struck me as an oxymoron, but in January 2023 at least, grafting a layer of woke onto the current models appears to be the intent of Silicon Valley. Now, when using their systems, we must ask ourselves whose version of reality is being promoted. In my own experiments I’ve noticed the models have a built-in warning relating to financial advice, and most conversations about gold and silver seem to vanish into nowhere (most likely absent from the training data).

  5. These machines often hallucinate. Finally, one of the more interesting aspects of the GPT models is their propensity to occasionally fabricate items and present them as truth. The obvious mistakes are easy enough to spot; it is the subtle ones that make these things dangerous. In my earliest research I was asking the GPT engine questions about its abilities and for an hour got rather excited before I realised it was just telling me what I wanted to hear. Given my technical background, I was thoroughly embarrassed, but the experience was useful because it gave me a first-hand taste of how someone unfamiliar with the inner workings of large language models could be misled. The danger becomes amplified again if the human is not aware they are interacting with an AI entity.
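For the curious, here is the ‘statistical autocomplete’ idea from observation 1 in its most stripped-down form: a toy word-level bigram model. The corpus and names are mine and purely illustrative; real LLMs use transformer networks with billions of parameters, but the generation loop – predict a distribution over the next token, sample from it, append, repeat – is conceptually the same.

```python
# A minimal sketch of 'statistical autocomplete': a toy word-level bigram model.
# The corpus below is made up for illustration. Real LLMs learn from terabytes
# of text, but the generation loop is the same: predict, sample, append, repeat.
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    candidates = follows[prev]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights)[0]

def generate(seed, length=12):
    out = [seed]
    for _ in range(length):
        out.append(next_word(out[-1]))
    return " ".join(out)

print(generate("the"))
# e.g. "the dog sat on the mat . the cat chased the dog ."
```

Notice there is no notion of truth anywhere in that loop – only statistics – which is also the seed of the hallucination problem in observation 5.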
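And a deliberately crude sketch of the layering described in observation 4. Real systems use curated datasets, RLHF-style fine-tuning and classifier-based moderation rather than the toy mechanisms below – every name and number here is invented – but the point stands: each layer is a choice made by the creator, and each choice is a form of bias.

```python
# A crude sketch of how each 'alignment' layer narrows what a model can say.
# All names and values are made up for illustration only.

# Layer 1: data selection. Whatever is excluded can never be learned.
raw_sources = ["news", "forums", "books", "fringe_blogs"]
training_data = [s for s in raw_sources if s != "fringe_blogs"]

# Layer 2: tuning. Reweight the model's raw preferences toward approved answers.
base_scores = {"answer_a": 0.5, "answer_b": 0.3, "answer_c": 0.2}
tuning_boost = {"answer_a": 1.5, "answer_b": 1.0, "answer_c": 0.1}
tuned = {k: v * tuning_boost[k] for k, v in base_scores.items()}
total = sum(tuned.values())
tuned = {k: v / total for k, v in tuned.items()}  # renormalise to a distribution

# Layer 3: guardrails. A final filter that overrides the model entirely.
BLOCKED_TOPICS = {"financial_advice"}

def respond(topic, candidate_answer):
    if topic in BLOCKED_TOPICS:
        return "I can't help with that."  # a canned refusal, not a prediction
    return candidate_answer

best = max(tuned, key=tuned.get)
print(respond("financial_advice", best))  # -> "I can't help with that."
print(respond("history", best))           # -> "answer_a"
```

Three layers, three opportunities for someone else’s version of reality to be baked in before a single word reaches you.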