Is A.I. a (real) threat to humanity?
By Jason Elder | Lead Consultant

So, is AI a threat to humanity? The answer is complicated, but in my opinion, we are still our own biggest threat… I was recently asked if I was worried about the rise of ChatGPT (and large language models in general), whether I am concerned about the loss of jobs and, ultimately, the threat of AI. Luckily for me, I am a developer who delves into that mysterious art, so I am not worried about my job (yet).
Getting back to the question I was asked, it prompted me to look back at our history, which is a good way of analysing our potential future. This is similar to the approach I take when training my ML models (i.e. when predicting future trends or detecting patterns and anomalies). It was a great conversation starter that led to a heated debate with some of my colleagues.
The History
Rewind 220 years to 1804: the train came along and ushered in a new era, ultimately underpinning the industrial revolution. However, it was still not something an average person could own (use, certainly, but not own) and was limited to specific routes; the horse remained the main mode of transport, as it had been for aeons. The train did, however, spur the development of smaller engines and gave people plenty of ideas for taking this technology to the next level. Fast forward to the late 1800s: horses were still the main form of transport for people, but the car had been born and was slowly becoming more mainstream.
At the height of the 1800s, if we zoom into New York City, there were an estimated 150,000 horses, ultimately causing the Great Manure Crisis of 1894. Fun fact: those houses you see in movies with the stairs up to the front door in New York were designed that way to avoid the manure. Whole industries existed to support these horses, from stabling, feeding and supplying tack, to the white-coated street-cleaning gangs of New York (the “White Wings”). At this time cars were still an oddity, few and far between. In 1908, Henry Ford released the first mass-produced car that was (arguably) affordable and marketed to the middle class: the Model T. By 1912, in just four years, cars outnumbered horses in New York. By 1930, horses had been almost completely replaced.
Those industries supporting horses were wiped out within a few years, surviving now only as niche businesses. I am sure people then feared for their jobs, as some do now, but as we saw 100 years ago, we pivoted from horses to cars, people found other jobs, and new industries were created, all in a (relatively) short space of time. Ultimately the car solved the manure crisis, but it caused its own issues, bringing a different type of pollution and contributing (significantly) to the climate crisis facing our generation.
Present Day
The rise of AI and ML is certainly analogous to the rise of the automotive industry: just as Ford had the Model T, we have OpenAI with ChatGPT as the turning point that made this technology mainstream. And just as humanity pivoted from horses to cars, people will find or create new jobs, and an industry will build up around it. I think we are in a similar place; sure, some jobs will become obsolete, but others will be created. It didn’t take long for people to pivot from shopping malls and grocery stores to ordering online. Malls and grocery stores were affected, but they still remain. From this point of view, the rise of AI does not concern me; it is reminiscent of the dot-com boom of the late ’90s. So, for me, it is an incredible time to be alive and to be working within the industry.
I am, however, concerned about how quickly people are coming to rely on these models to write or create content, blissfully unaware that the current technology is fallible and can make mistakes. Google learned this when Gemini generated images that were inappropriate and offensive, as did Microsoft with the Tay fiasco, and there are many more stories like these. So, I do feel people need to understand the limitations and be taught the right way to use this technology.
Another concern I have is the way these models are trained and how easily some can be manipulated. What is stopping malicious code from being inserted during training, appearing correct or harmless to the inexperienced, and ultimately being used in a way similar to the Stuxnet malware? Stuxnet was (reportedly) used to spy on, target and damage centrifuges used by Iran in their nuclear program by subverting the software controlling those centrifuges. Data leaks are also a very real concern, with prompts pushing sensitive information to remote servers, and where does the law stand on this? Does ChatGPT or Copilot need to report it if someone tries to get it to design a bomb or write code to hack a bank, or is a simple “I can’t answer that question” response enough?
But the biggest concern for me is the race to create AGI (Artificial General Intelligence), with the promise that it will solve world hunger and cure diseases (anyone remember IBM Watson?). I believe that with our current technology, without AGI, we could solve a lot of our problems if we put our collective minds and efforts into it, so I don’t buy that reasoning. As for whether AGI will be evil, I don’t think anyone can definitively guarantee that one way or the other.
I was fortunate to attend the Microsoft Envision AI Connection Day in Cape Town this year, where Microsoft made it abundantly clear (from their viewpoint) that this technology is a tool to assist users and improve productivity, and showcased their work on addressing the concerns I raised above. Microsoft emphasised that the new “AI Economy must be built on trust” and outlined their five-point blueprint for governing AI, which I wholly support.
[Image: Microsoft’s five-point blueprint for governing AI. © Microsoft 2023-2024]
I also back Microsoft’s call for companies to prioritise user upskilling and change management, and to develop their own ethical AI policies and frameworks. So I lend my voice to Microsoft and others calling for more government oversight and regulation, as I feel that in order for us to use this technology safely and ethically, and not just ‘trust’ it, the industry needs to be regulated in a way that does not hinder innovation but promotes safety.
Governments are notoriously slow to act; it took a while for seat belts to be legally mandated and for driving tests and licences to become compulsory for cars. Those things made cars safer; not the safest, but certainly safer than if nothing were in place. The EU AI Act is a great starting place for Europe, but such regulation is sadly lacking in many other parts of the world. And so, while we wait, it falls on us as leaders, companies and civil society to work together to create or revise guidelines for the ethical and safe use of AI.
Conclusion
Getting back to the question of AI as a threat to humanity: throughout our history, we have pretty much found a way to weaponize, abuse or misuse most of the technology we have ever invented. Trains led to cars, which in turn led to tanks, and arguably made it possible for wars to be fought on a scale never seen before each World War. Planes went from the basic Flyer of 1903 to dropping nuclear bombs in World War 2, just over 40 years later. Although we have not yet achieved AGI, some aspects of this technology are already being used against us, and cyber security threats are increasing, both in their sophistication and in their ability to outsmart current defences. This will only continue as these models mature.
Let’s say we do achieve true AGI. Just because it is an intelligence, or because we created it, does not mean it will help us or even work with us. I don’t know whether it would wipe us out (as an evil AI), but I also don’t know that it would solve all the world’s problems. We don’t code our children; we raise them. We teach them love, empathy, kindness, respect, humility and so on, knowing that this will ultimately shape who they become. I don’t think AGI would be any different if we achieved it: it would be a sum of how it is designed, but more importantly, of how it is taught, trained or raised. I personally believe the slower approach of raising AGI like a child and nurturing it would be safer, giving us time to adapt, assess and understand the technology.
I do believe that our current technology (without AGI) is already helping, and that the benefits outweigh the aforementioned threats. Used properly, it will continue to help us achieve more; just as cars unlocked more potential for humanity, so can this. I am looking forward to seeing what people create and do with it, and to watching it help those who are disadvantaged or living with disabilities to do and achieve more in their lives. For me, that is where the true greatness of this technology lies: empowering and helping each other and our world. And just like any other tool, the good and the bad ultimately depend on the intentions of its users.
So, do I think the future is exciting? Certainly, especially knowing what to look out for… Finally, forgive me if there are any mistakes in this opinion piece; it was done the old-school way: no AI input of any kind, just good old-fashioned research on the web.