Dissolution of Trust in the A.I. World

I've written a fair bit over the last few weeks about the developments in A.I., its potential impact on jobs and the response required by leaders. Today, however, I want to discuss the real threat of A.I. to destroy trust across the broad spectrum of society. 

With the release of ChatGPT, we are already starting to ask questions about who wrote the work we are presented with. Educational institutions are already flustered about ChatGPT's ability to write students' assessments, and so they should be. Every day, more and more people come out with suggestions on how to ask better questions of ChatGPT to get a higher-quality response. All of this, and it has only been out for about three months. Just wait until we have an even more advanced system with internet access. 

Who Wrote What

Of course, ChatGPT is not the only A.I. system out there that can write responses to various questions, and to be realistic about what is going on, the question of who wrote what has always existed. Plagiarism has always been the scourge of academic assessment. Students, teachers and professors get caught using someone else's work and trying to pass it off as their own. Students are made to redo their assessments. But for a professor or doctor who gets caught plagiarising, it is the end of their career. All of their past and present work gets scrutinised, and there will always be a question of what else they plagiarised. 

Another issue of who wrote what comes under ghost-writers. A ghost-writer is paid by someone to write a book that the payer can then claim as their own work. This has become quite common with the rise of eBooks. There are many sites selling eBooks, and more, that people can claim as their own work and then sell online. A more famous example is that of George Lucas and the Star Wars novelisation. It is widely known that Alan Dean Foster wrote the novelisation of Star Wars for George Lucas, who then put his name to it. (Yes, I am old school. I still call the 1977 movie Star Wars without any subtitle.) 

While, unlike plagiarism, ghost-writing is accepted by the wider community, when a person is found to be using a ghost-writer, they do lose some trust, as people wonder how credible the “author” really is. However, in these circumstances of plagiarism and ghost-writing, the distrust created was local. That is, only the person who did the wrong thing became untrustworthy. A.I. can now not only affect the trust of the individual who uses the technology; the individual can now create media that destroys the trust of other people on a far grander scale.

Deep Fake to Outright Lie

Moving on from written work, the potential for A.I. to destroy trust in society grows exponentially. Most people are already aware of deep fake videos. In most cases, a deep fake video has simply been someone swapping one actor's face for another's, and they are normally harmless. As long as the actors can take a joke, it can be fun to see a deep fake video of Arnold Schwarzenegger as Rambo or Sylvester Stallone as the Terminator. 

I know that it has not all been fun and games. There have been political videos created to cause disruption in elections. There have also been deep fake videos of innocent people whose faces have been superimposed onto a porn actress's body and released to the internet. 

However, while none of these are nice, and people have been prosecuted, most people in the world don't have the knowledge or time to create a deep fake video. So the number of deep fake videos that make it online is quite low, but that is about to change. 

We all now have access to picture-generation A.I. systems. These systems work just like ChatGPT. All a user has to do is type in a command describing what they want to see, and the A.I. will create the picture. Some of the ones I have explored are fun to play with and produce very realistic pictures. I also know that there is already A.I. video-creation software available to be tested by the general public. 

Then, just recently, I have been playing around with A.I. voice generation. These A.I. programs can take any voice and, in a matter of seconds, yes seconds, recreate it so convincingly that the average person cannot tell it from the real thing. In the last Star Wars TV show, Obi-Wan, James Earl Jones was credited with doing the voice of Darth Vader, as he has done since the '70s. However, Jones didn't do the voice; it was A.I. The A.I. system was trained to recreate his voice, and if I had not been told, I would never have picked it. 

Now, what does all this mean? We will soon have written text, audio recordings, pictures and videos all out on the internet that most of us will not be able to discern from the real thing. So the question is, who and what do we trust? How will we tell what is the real thing and what is A.I. created? 

Trust Abused

We already know that weak people and groups use the internet as their weapon of choice to attack and discredit people who dare to use facts to prove their ideology flawed. What will happen when they get their hands on audio, picture and video creation software? What will these cowardly anonymous people do in order to destroy the reputation of people or companies that don't conform to their way of thinking? 

How will A.I. be used and abused in the political arena? We are very well aware of the disgusting and dirty tricks politicians play to smear the reputations of their opponents. How will they use this technology when running a campaign for an election?

How will individuals use it to create their own movies? Get a script written by ChatGPT based on a book that has not yet been adapted. Then use the video and voice generation software to create the film. You could have Tom Hanks and Charlie Chaplin team up for a comedy. Or worse, Margaret Thatcher, Ronald Reagan and Mikhail Gorbachev team up for a science-fiction western. More seriously, you could undermine Hollywood by producing a movie of the same book a studio has purchased the rights to, creating it for far less than they are spending and having it out far quicker than they can. 

In personal cases, it could be used to create videos of husbands or wives cheating on their partners in order to win court cases. We could see the rise of fake videos of people claiming they were assaulted by someone else so that they can sue that person. Narcissistic sociopaths may use it just to stir up trouble and play the victim to get more attention. Or fake videos of police assault. And the list goes on. 

Now, none of this may come to pass, and what I have written here may be my wild imagination. However, what it all comes down to is the dissolution of trust in nearly everything we read, see and hear. Unfortunately, it is this simple: if you don't know which piece of information is incorrect, then you can't trust any of it. Even if you don't care about A.I. or never use it, you will be affected by the distrust it will create. Because as fake A.I.-produced media ramps up, and it will, we, the average people, will not be able to know what information is correct and what is A.I. created.

This then becomes a real problem because, for a society to work, we need trust. Every society, in its basic form, is just an expanded trust relationship, and part of that trust is built on honest communication. To be able to operate in the world we currently live in, we have to have a general trust that people will do the right thing and treat each other with respect and reasonable care. However, if we can no longer trust one part of society, the media in all of its forms, we don't have a relationship, and if that is the case, then we cannot operate as a society.

So what do we do?

Therein lies the real problem, and I can tell you, I do not have all of the answers. Then again, I wouldn't trust anybody, real or A.I., who said they did. This is a very complex issue that is still developing and will keep developing for many years to come. 

It is clear that we will require laws, many based around privacy and copyright, to be able to proceed safely with A.I. However, we have to ensure that while rights and privacy are protected, any new law is not so draconian as to stop investment and development in A.I. Because even though there will be issues with A.I., it still offers far more benefits. 

I would like to say that we need to progress slowly, but I don't think we can. A.I. is developing at a very fast rate, and once we can use A.I. to develop better A.I., the speed of development will be exponential. So we need to move quickly, but calmly and rationally, and remember at all times that humans must come first. 

What I do know is this. Regardless of some people talking about the doom and gloom that A.I. can bring, I really don't think this is the end of the world as we know it. However, it will reshape our world, especially through the technology still in development that is likely to be released in the next few years, and out the other side of this the world will look totally different. But that may take another hundred or so years.

Thankfully, while all of these things are yet to really come to pass, I know that there are people out there who are smarter than I am, and hopefully they will be able to figure out all of these issues. But I can say one thing. A.I. may be good online, but that is where it stops. If you want to know you are getting the genuine article, you can always see the person live. 

Mind you, we haven't even discussed robotics yet…

Terry Shadwell

Helping people help themselves so that they can lead a greater life. 
