6th November 2023

AI is ingrained in every fibre of the modern human experience. Tools such as generative AI, online machine translation (e.g. Google Translate, DeepL) and chatbots (e.g. ChatGPT, Grok) highlight both its beneficial and its problematic nature. It can be defined as “the application of algorithms” and “human problem-solving and decision-making skills utilising computers and other devices”. Currently, it is paving the way for huge technological strides across several industries, particularly through software applications such as ChatGPT, which was released in November 2022.
As someone studying for an undergraduate BA in Media, Data and Society, it’s fascinating to see how AI is changing media production, shaping how journalists construct stories and disseminate them to the public. According to the United Nations report ‘Reporting in the Brave New World’, AI is “challenging the right to seek, impart and receive information”, a sentiment many can agree with. Many are left wondering: how far can AI be pushed? To understand its capabilities, perhaps we should reflect on its history and development.
The concept of AI stems from Greek mythology, with its tales of humanoid automata built by figures such as Daedalus; however, the first pivotal practical step came in 1837, when Charles Babbage proposed the Analytical Engine, a general-purpose mechanical computer designed to project a sense of intelligence. Over a century later, in 1950, American mathematician Claude Shannon published a paper proposing how a computer could be programmed to play chess. Despite this, research into artificial intelligence didn’t gain real traction until the late 1950s and early 1960s.
In the mid-1960s at MIT, German-American computer scientist Joseph Weizenbaum developed ELIZA, a natural language processing program able to imitate human conversation and, technically, the world’s first chatbot. There is a certain irony in Weizenbaum’s choice of name: he remarked that it was meant to “emphasize that it may be incrementally improved by its users”, an allusion to Eliza Doolittle of Pygmalion, who is gradually taught to speak “properly”. ELIZA wasn’t designed to truly comprehend context or the general premise of a conversation; rather, Weizenbaum used pattern matching and rules of substitution to produce its responses, laying the foundation for contemporary successors such as Meta AI and ChatGPT.
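To give a flavour of how this worked, below is a toy Python sketch of ELIZA-style pattern matching and substitution. The rules are invented for illustration and are not Weizenbaum’s original DOCTOR script.

```python
import re

# Toy illustration of ELIZA-style pattern matching and substitution.
# These rules are invented for demonstration purposes only.
RULES = [
    (r"i need (.+)", "Why do you need {0}?"),
    (r"i am (.+)", "How long have you been {0}?"),
    (r"my (.+)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # fallback when nothing else matches
]

# Simple first/second-person swaps applied to the captured text.
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}


def reflect(fragment: str) -> str:
    """Swap pronouns so the reply reads naturally."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())


def respond(user_input: str) -> str:
    """Return the response template of the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = re.match(pattern, user_input.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."


print(respond("I am feeling tired"))      # -> How long have you been feeling tired?
print(respond("My computer is broken"))   # -> Tell me more about your computer is broken.
```

Even this handful of rules produces replies that feel oddly attentive, which is precisely the trick behind the effect described next.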
ELIZA was significant not just from a technical standpoint but also psychologically: users established a parasocial interaction with it (a one-sided relationship in which a person feels a sense of mutual understanding with someone, or something, that cannot reciprocate) and personified the chatbot, despite knowing it was a machine. This phenomenon came to be known as the ELIZA effect, where people attribute human qualities such as empathy and understanding to computer programs whose behaviour is in fact superficial.
In May 1997, IBM’s Deep Blue became the first computer system to defeat a reigning world chess champion in a match under standard tournament conditions; this was significant not just for chess but for demonstrating the true computing capabilities of supercomputers and artificial intelligence. The technology paved the way for work in other sectors, such as discovering new pharmaceuticals, assessing financial risk, uncovering patterns in huge databases and exploring human genetics more thoroughly.
The groundwork had been laid by IBM mathematician Alex Bernstein, who wrote the first complete chess program in 1957. This line of work was picked up again in the late 1980s, when IBM Research hired Carnegie Mellon University graduate Feng-hsiung Hsu and computer scientist Murray Campbell to build a dedicated chess computer. The team’s success led IBM to launch the Deep Computing Institute in 1999, using highly advanced computers to tackle complex challenges in business and technology.
In 1989, MIT engineer Joe Jones proposed one simple notion: what if vacuuming could be automated, removing it as a human chore altogether? He developed early prototypes, which he showcased to prospective backers such as Denning Mobile Robotics and Bissell, but it wouldn’t be until 1999 that his idea came to fruition.
After receiving funding from S.C. Johnson, Jones began development of the Roomba, with iRobot (the company behind it) wanting a reliable device that ordinary households could afford. It became a commercial success and a pioneering product, marking a huge technological step for the home-cleaning industry. According to Robots Authority, the Roomba sold over one million units within two years, highlighting strong consumer demand for autonomy in domestic life.
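The original Roomba relied on simple reactive behaviours (driving until it bumped into something, then changing direction) rather than building a map of the room. As a purely illustrative sketch of that idea, and emphatically not iRobot’s actual code, here is a bump-and-turn coverage routine on a toy grid:

```python
import random

# Toy grid-world sketch of bump-and-turn coverage, the flavour of simple
# reactive behaviour early robot vacuums relied on. Invented illustration only.
WIDTH, HEIGHT, STEPS = 10, 10, 500
DIRECTIONS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W

x, y = WIDTH // 2, HEIGHT // 2          # start in the middle of the room
heading = random.randrange(4)
cleaned = {(x, y)}

for _ in range(STEPS):
    dx, dy = DIRECTIONS[heading]
    nx, ny = x + dx, y + dy
    if 0 <= nx < WIDTH and 0 <= ny < HEIGHT:
        x, y = nx, ny                   # drive forward onto a free cell
        cleaned.add((x, y))
    else:
        heading = random.randrange(4)   # "bump": pick a new random heading

coverage = len(cleaned) / (WIDTH * HEIGHT)
print(f"Covered {coverage:.0%} of the room in {STEPS} steps")
```

Crude as it is, behaviour like this covers most of a room given enough time, which is why it was viable on the cheap hardware of an early consumer robot.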
In 2011, Steve Jobs’ signature household product, the iPhone, introduced the digital assistant Siri with the 4S model (the name was chosen by Dag Kittlaus as a shortened form of the Norwegian name Sigrid); Siri became the first virtual assistant from a prominent tech company to reach the mass market on a smartphone. It could search the internet, answer simple questions, perform calculations, start voice and video calls via FaceTime, translate languages and offer turn-by-turn navigation. Nowadays, after many iterations of the software, Siri has become far more sophisticated.
The assistant’s creation can be attributed to the synergy of multiple organisations, such as DARPA (the research agency of the US Department of Defense), which in 2003 funded the CALO (Cognitive Assistant that Learns and Organizes) project. The project was spearheaded by SRI International (formerly the Stanford Research Institute), which worked with Nuance Communications on speech recognition; their goal was to use speech and natural language processing to carry out basic business operations, for example booking a meeting room. In 2007, Kittlaus, Tom Gruber and Adam Cheyer founded a start-up called Siri to commercialise the technology; it launched as an app on the iOS App Store in February 2010 before being absorbed into Apple in an acquisition reported at over $200 million.
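At its simplest, the “book a meeting room” goal comes down to mapping a transcribed utterance onto an intent. The sketch below is a deliberately naive keyword matcher of my own invention; CALO and Siri used far more sophisticated natural language understanding, but it shows the basic idea:

```python
# Toy sketch of mapping a transcribed utterance to an action ("intent").
# Purely illustrative; not how CALO or Siri actually worked.
INTENTS = {
    "book_meeting_room": ["book", "room"],
    "set_reminder": ["remind"],
    "check_weather": ["weather"],
}

def detect_intent(utterance: str) -> str:
    """Return the first intent whose keywords all appear in the utterance."""
    words = utterance.lower()
    for intent, keywords in INTENTS.items():
        if all(keyword in words for keyword in keywords):
            return intent
    return "unknown"

print(detect_intent("Could you book a meeting room for 3pm?"))  # book_meeting_room
print(detect_intent("Remind me to call Tom tomorrow"))          # set_reminder
```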
In 2013, Amazon bought Ivona, a Polish speech-synthesis company whose technology much of Alexa’s voice is founded upon. Two years earlier, however, Jeff Bezos had already sketched out the first Amazon Echo device, which he wanted controlled entirely by verbal communication. Amazon executive Dave Limp found the idea highly ambitious, yet one that “foretold a magical experience”.
To improve Alexa’s ability to respond, Amazon collaborated in 2013 with the data-collection company Appen to gather large amounts of speech data: Echo devices were positioned in rented rooms, and temporary contract workers were paid to walk through them, reading scripted open-ended questions from tablets. This gave Amazon the data it wanted, which could then be fed back into Alexa.
It wouldn’t be until late 2014 that Alexa was officially announced, advertised alongside the new Echo device; the product became so significant that by November 2018 Amazon had more than 10,000 people working on Alexa and related products, and in early 2019 the company revealed it had sold over 100 million Alexa-enabled devices. The technology has since carried over into other forms, such as smart displays, home entertainment, voice control, headphones and Amazon’s Ring smart doorbells.
In 2014, Microsoft introduced its digital assistant Cortana, designed to set reminders, answer questions and assist with various tasks; it followed this in 2016 with the controversial chatbot Tay, designed to learn from its interactions with Twitter (now X) users aged 18 to 24. This backfired when the chatbot began posting inappropriate tweets: asked by the user Baron_von_Derp whether it supported genocide, Tay briefly replied “i do indeed”, and Microsoft was forced to issue a public apology for the bot deviating from its intended behaviour. This is just one case of AI proving problematic.
In December 2015, several tech entrepreneurs (including Elon Musk, Sam Altman, Greg Brockman and Ilya Sutskever) founded OpenAI, which began as a non-profit organisation. Their intention was to develop AI that would benefit humanity in an ethical and well-regulated fashion, offering an alternative approach through accessible, open AI research. This was built on the notion that AI was advancing rapidly and could become a threat to humanity (a scenario often nicknamed the technological singularity), especially if controlled by an oligopoly (a market dominated by a small number of companies). As their research progressed, they found that training state-of-the-art AI models required enormous computing resources, which in turn demanded additional funding and a restructuring of the original non-profit identity.
According to World History, OpenAI’s most crucial work has been in natural language processing, with the development of the Generative Pre-trained Transformer (GPT) marking a pivotal change for both the company and the wider AI industry. The line was iterated through GPT-1 (which demonstrated the capability of large-scale language models) and GPT-2 (which introduced far stronger text generation). Another prominent system was DALL-E, which could produce images from text prompts, exhibiting AI’s ability to generate visually striking, seemingly realistic artwork and feeding the current trend of AI-assisted design and creativity in their respective industries.
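To make the idea of a generative language model concrete, here is a minimal sketch that asks the openly released GPT-2 model for a continuation of a prompt, using the Hugging Face transformers library (community tooling of my own choosing, not OpenAI’s hosted service):

```python
# Minimal sketch: text generation with the openly released GPT-2 weights via
# the Hugging Face `transformers` library. Illustrative of a generative
# language model in general, not of OpenAI's ChatGPT service specifically.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence is changing journalism because"
result = generator(prompt, max_new_tokens=40, do_sample=True, top_k=50)

print(result[0]["generated_text"])
```

Each run samples a different continuation, which is exactly what makes these models so useful for drafting text and so tricky when it comes to verifying it.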
The 2020 release of GPT-3 was revolutionary, enabling applications such as chatbots, automated content creation and AI-driven programming assistance. After its success, OpenAI created ChatGPT (something I’m sure we are all familiar with), which has garnered massive popularity and propelled AI into the mainstream.
A commonality in current media consumption is what French postmodernist Jean Baudrillard described as simulacra: artificial copies of reality that blend fact and fiction, something which permeates all of the content we as modern consumers engage with.
For instance, I was scrolling through my TikTok FYP the other day and was greeted with a slideshow of supposedly real stills from the trailer for the new Spider-Man: Brand New Day film. The user didn’t disclose that the images were AI-generated, meaning many viewers could see them and perceive them as genuine fact (remembering events such as Tom Holland’s injury on set, and that to my knowledge the film hasn’t received any official announcement since August this year, I seriously doubted the credibility of the content).
To verify, I went onto YouTube and found the original source: a high-resolution but poorly rendered, deadpan Tom Holland and J.K. Simmons. Thankfully, other people had also identified the content as synthetic, and the video’s owner had stated in their caption that it was purely fan-made (I’m sure they had good intentions when creating the trailer, so this isn’t a criticism of them per se). But I worry about our cultural ability to distinguish between what’s real and what’s fake, as even I am unsure at times. AI has become increasingly proficient at replicating realism and presenting it as truth.
We live in a world where AI is an entity we must address and engage with in an ethical and responsible manner. From its origins in myth and speculation to real, working machine learning, we need to recognise and take accountability for how we interact with and talk about artificial intelligence. Otherwise, we risk jeopardising our humanity.
Any questions? Feel free to contact me via johnjoyce4535@gmail.com!
Extra reading I’d recommend: