I woke up this morning when a friend messaged me with a vague question: "Why do we (as humans) develop AI?" It is a difficult question; every time we meet other researchers, we see different reasoning behind their endeavors. Pride, satisfaction, a sense of achievement, challenge, money, promotion, prizes; every individual is driven by a distinct motivation. And yet, everyone sincerely wants to make the world a better place.
It would be tempting to combine those motivations and assume that we are in good hands. However, no one has any control over the direction of development. Motivations and values are subject to change, and thus we can suddenly find ourselves in a very unfavorable future scenario.
Competition is not negative by definition, but it can have some very real negative consequences. The problem is well demonstrated in Scott Alexander's (2014) blog post "Meditations on Moloch". Simply put, there is always a certain percentage of people who are willing to throw their values under the bus in order to gain a better chance of survival.
Even the most significant institutions of today are incapable of controlling technological development. The church was once an institution that had control over technology, but those days are long gone. In recent history, nation states have taken over the church's role and the responsibility to serve the best interests of the people. However, the pace of technological change is too fast, and the ruling institutions fail to keep up.
While we have seen some interesting openings from large institutions like the European Union, new technologies are adopted faster than sufficient regulation can be put in place. For example, the European Union realized that data is a very valuable asset - one that should be owned by people, not by overseas businesses - and created the GDPR to slow down the concentration of data. But it acted ten years too late.
Finally, we have academia. Surely academia has processes that make sure the development of AI follows certain ethical principles and guidelines? Not so much. The best researchers are moving to the big tech giants and organizations that can offer them not only higher salaries, but also a better environment for doing research.
Data is the key to working with AI, and it is concentrating into the hands of a few. It seems obvious that researchers should then go to work for those "few": if they don't, they can't advance their research or their careers. This brings us back to the first point: even though these individuals are driven by making the world a better place, they have to work for these businesses. And the corporations exist to make a profit, which easily leads to questionable outcomes.
Even if the ones in charge have good intentions, they still have to play by the survivalist logic described above. The modern business world is a highly competitive environment where survival of the fittest is the rule. Technology is the key to staying afloat, so every business needs to hire the best researchers to develop artificial intelligence for them.
The circle closes, and we realize that no matter the individual motivations, no one is in charge of the development of AI. No one can control it.
The god of technology, one that promises vast riches to everyone who pledges their loyalty and effort to developing it, is doing its work just to consume us into meaningless digital perambulation. Once we have found the secret to the singularity, we will unleash it to reign endlessly over the world.
I must admit that the outlook I present is pretty harsh, but this text is mainly here to raise discussion, not to serve as a scientific introduction. I really suggest reading "Meditations on Moloch", as it will give you some extra insight into this opinionated piece!
-Tapio Vepsäläinen