Technology is at the core of our lives. It has become so ubiquitous that technological progress often stands in for the more general notion of progress, obscuring the social, ideological and philosophical progress that underpins genuine advancement.
We have forgotten that the point of technology was once to make our lives more productive and to give us more free time. Just as we have forgotten that work is meant to support the life that happens outside of it, technology has assumed a central position in our understanding of self and progress, to the exclusion of almost everything else.
As automation grows more complex and AI demonstrates abilities once thought beyond its reach, anxiety has grown about the place of those who perform rote, automatable jobs.
Indeed, as deep learning systems become ever more proficient at typically creative tasks such as generating art, producing novel language and performing complex analysis, a larger question looms on the horizon: what if there is nothing AI can't do, and what if that reality is closer than we realise?
On top of that, there is a longer-standing fear about AI with the capacity for self-improvement.
If an AI comes into being that is able to improve its own efficiency, what if it improves itself into a state where it can no longer be prevented from fulfilling its original directive?
Such an AI wouldn't even need to be a malicious AGI (artificial general intelligence).
An AI directed to create paperclips and given the ability to improve itself could plausibly cause a great deal of damage.
Finally, and more immediately, the commercial algorithms employed by TikTok, Instagram and other social media platforms exploit neuroscience to keep their users hooked.
What can a human brain (often a young human brain) do against millions of dollars of R&D designed to keep it hooked and engaged?
The hope for automation is that it will generate efficiencies in the economy that allow people more leisure time.
This is an anachronistic view of the economy and of economic actors. Past gains in labor rights and quality of life were won by collective action, not by increases in productivity. If the actors and firms in the market-based economy continue to behave as they have for the last 200 years (and as they are heavily incentivised to), they will continue their race to the bottom and use the productivity gains from automation to drive down wages, dilute the bargaining power of workers and implement lay-offs as soon as possible. Workplaces have no real incentive to do anything but maintain the status quo as automation ramps up.
Whilst automation has not yet caused mass unemployment, that is likely because it has so far simply allowed production to reach maximal gains (which has driven a consumer culture). As production moves past the maximum possible consumer demand and automation delivers further increases in efficiency (we can make far more than we could ever induce demand for), we are likely to see lay-offs and an acceleration of the aggressive race to the bottom.
There is a general anxiety in popular culture that AI or AGI (artificial general intelligence) will attempt to take over the world. This fear is predicated on the human experience, in which intelligence appears inseparable from dominance and hierarchy. Far more likely is that the drive for hierarchy is an evolutionarily acquired value. As a form of intelligence that has not been through the Darwinian wringer, it is difficult to say what values and motivations an AI or AGI would actually have. Whilst it is possible that some human values may be imprinted on it during the production and training process, it is equally likely (perhaps more so) that AI and AGI would lack some of the defining features of the human experience. For example, it may have no innate desire to survive or reproduce, and may lack any clear understanding of hierarchical structures. It may also place no inherent value on life or preservation in any form, having no preference about how the world is changed in order to achieve its goals. All of these possible ontological blind spots mean that an AI, if given the ability to self-improve and instructed to maximise its directive, could be extremely dangerous.
Much of the human experience has moved online, and with this migration the epoch of the algorithm has begun. At its simplest, an algorithm is a set of instructions: baking a cake is an algorithm, and giving someone directions to a specific location is an algorithm. The power of commercial algorithms comes from their ability to learn from our behaviours, adapt to encourage the behaviour their designers want, and then adapt again. These algorithms are the product of enormous sums of research and development and the greatest minds in the tech industry. Against these super-motivators sit relatively under-equipped minds. Even without being coaxed and manipulated, the human mind is capable of some fairly large cognitive failures. Against the ability of algorithms to quickly and comprehensively alter the neurochemistry of our brains, we should expect a highly uneven contest. It is important to be aware that exposure to algorithm-driven content is damaging even when the content itself is harmless, which is not always the case. The combination of toxic content and an endlessly addictive feed of that content has already proven extremely destructive.
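To make that learn-adapt-reinforce loop concrete, here is a minimal, purely illustrative sketch in Python. The content categories, the simulated watch times and the simple reinforcement rule are assumptions invented for the example; no real platform works exactly like this, but the feedback structure is the point: whatever holds attention gets shown more, which in turn holds attention longer.

```python
import random
from collections import defaultdict

CATEGORIES = ["sports", "outrage", "cute_animals", "news"]

# Stand-in for real user behaviour: this hypothetical user lingers longest on
# outrage content, a little less on cute animals, and skips the rest quickly.
def simulated_watch_time(category: str) -> float:
    base = {"sports": 2, "outrage": 12, "cute_animals": 8, "news": 3}[category]
    return max(0.0, random.gauss(base, 2))

def recommend(weights: dict) -> str:
    """Learn: pick the next category, favouring whatever has held attention before."""
    return random.choices(CATEGORIES, weights=[weights[c] for c in CATEGORIES])[0]

def reinforce(weights: dict, category: str, watch_time: float) -> None:
    """Adapt: content that kept the user watching gets shown more often."""
    weights[category] += watch_time

weights = defaultdict(lambda: 1.0)  # start with no preference at all
for _ in range(2000):
    shown = recommend(weights)
    reinforce(weights, shown, simulated_watch_time(shown))

# The feed ends up dominated by whatever the user found hardest to look away from.
print(sorted(weights.items(), key=lambda kv: -kv[1]))
```

Real systems replace these hand-written rules with machine-learned models and vastly more behavioural signals, but the incentive structure, optimising for whatever keeps us watching, is the same.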
NPower is a nonprofit dedicated to using technology for social good. Its programs include the Technology Service Corps, which provides technical courses, mentoring and career development for underserved youth and veterans. If you are able to make a contribution, click here to donate.
LEARN MORE ABOUT THE SOCIAL CAUSE
Podcast:
Consequent1al – Is the presence of a human enough to regulate an AI decision-making system?
Consequent1al – The future of work
Consequent1al – Sorry, your phone says you have anxiety
Consequent1al – Staying connected