Over the last 20 years, technology has become embedded in virtually every aspect of our daily lives: work, entertainment, fitness, meals, socializing, dating, creating, shopping, banking, finance, and more. It has even snuck into our relationship with ourselves. We use apps to sleep, think, and feel better.
Technology is everywhere, and it's not stopping.
You have probably noticed the recent explosion of artificial intelligence apps for writing and art. [The illustrations in this piece were created on jasper.ai] Soon, everyone will have access to AI that can create art, write essays, advertising copy, web content, social media posts, TED Talks, and much more at little to no out-of-pocket cost. And we can only imagine the plethora of other AI-driven apps currently in the works.
It all seems super exciting, and it is. Technology is powerful, fun, and even magical.
But as we depend more and more on technology, how will it change us?
In the interaction between humans and technology, who is adapting to whom?
Is the technology being built for humans, or do we have to change ourselves to use the tech?
As time passes, will we become more like robots or the AI models we use?
Over the next 30 years, as we increasingly interact with technology, who or what will we become?
In Human-Focused Turing Tests: A Framework for Judging Nudging and Techno-Social Engineering of Human Beings, Brett M. Frischmann, Professor of Law at Yeshiva University, discusses how technology shapes humans:
Humans have been shaped by technology since the dawn of time, and of course, humans have shaped other humans through technology for a very long time as well. Many people have written on this topic. Even the topic of shaping humans to be machines is not new; it has garnered significant attention in the context of the workplace and mass media such as radio and television.
Looking at the present and to the near future, one thing seems clear: interconnected sensor networks, the Internet of Things, and (big) data-enabled automation of systems around, about, on, and in human beings promise to expand the scale and scope significantly. It is the fine-grained, hyper-personalized, ubiquitous, continuous and environmental aspects of the techno-social engineering that make the scale and scope unprecedented.
Frischmann devised a Human-Focused Turing Test to assess the impact of technology on humans.
The original Turing Test, devised by computer scientist Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior indistinguishable from that of a human.
It has become an important method to evaluate artificial intelligence, with regular Turing Test competitions to determine the extent of robots’ growing ability to mimic human behavior.
Frischmann repurposed this test to ascertain the degree to which humans are becoming more like machines. He presents it as a tool "for identifying and evaluating our humanity and evolving relationships with technology."
The question we need to consider is not whether machines are becoming more human but whether humans are becoming more like machines.
Is all this technology dehumanizing us?
Are we slowly being re-fashioned into machines of some sort?
Will there come a point where we cannot distinguish our minds from the machines around us?
In a Pew Research Center study, Artificial Intelligence and the Future of Humans, 979 technology pioneers, innovators, developers, business and policy leaders, researchers, and activists were surveyed.
They noted that networked artificial intelligence would amplify human effectiveness but also threaten human autonomy, agency, and capabilities. And they expressed concerns about the long-term impact of these new tools on the essential elements of being human.
Their areas of concern included human agency, data abuse, and dependence lock-in.
Human agency:
Decision-making on critical aspects of digital life is automatically ceded to code-driven, "black box" tools. People lack input and do not learn the context of how the tools work. They sacrifice independence, privacy, and power over choice; they have no control over these processes.
Data abuse:
Data use and surveillance in complex systems are designed for profit or for exercising power. Most AI tools are and will be in the hands of companies striving for profits or governments striving for power. Values and ethics are often not baked into the digital systems making people's decisions for them.
Dependence lock-in:
Many see AI as augmenting human capacities, but some predict the opposite - that people's deepening dependence on machine-driven networks will erode their abilities to think for themselves, take action independent of automated systems and interact effectively with others.
The experts laid out a variety of perspectives and solutions to the challenges AI would present in the future.
Everyone agreed that sooner rather than later, we will all live alongside powerful, intelligent machines that can transform who we are.
In a world where artificial intelligence drives our systems, we face a clear and present danger.
We can be changed in ways that are unhealthy for us.
We can be “programmed” to act in ways that do not serve our best interests.
If we are not prepared, technology can and will be used to take control of our minds.
What can we do?
How do we counteract this inevitable takeover of humanity by AI systems?
For Big Tech, with a business model built on tracking and controlling us, there is no incentive to prevent this; quite the contrary. We can expect their surveillance activities to expand as they integrate more sophisticated AI into their systems.
We cannot rely on them. We need new technology. Technology that is designed to serve humans. Incorruptible technology, with safeguards against centralized power built into the code.
I am an avid proponent of web3 because it serves this purpose.
Decentralization, transparency, and human value are central to the web3 ethos. Unlike the technologies that dehumanize us, web3 requires us to be more human, more autonomous, and far more conscious of our relationship with technology.
One of the biggest challenges with web3 is the re-education needed to get users to value their digital property, privacy, security, wealth, and well-being.
We must wake up from the hypnotic slumber we've been seduced into by Big Tech. We must open our eyes, shake off the drowsiness, and reclaim ourselves and our lives.
For the future to stay human, we need a whole new technology. We have it in web3.
Let’s wake up and use it!
REFERENCES:
Frischmann, Brett M. Human-Focused Turing Tests: A Framework for Judging Nudging and Techno-Social Engineering of Human Beings (September 22, 2014). Cardozo Legal Studies Research Paper No. 441. Available at SSRN: http://dx.doi.org/10.2139/ssrn.2499760
Anderson, Janna, and Lee Rainie. Artificial Intelligence and the Future of Humans. Pew Research Center, December 10, 2018. https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/
What happened this week:
Some clear reminders from this tumultuous week in crypto:
1/ We need appropriate regulation as quickly as possible.
2/ This movement is not about quick money, it’s about building a better future.
3/ We need to think in terms of long-term progress, not short-term gain.
More on this when the dust settles, and we know the facts.
This excellent tweet from Miles Jennings sheds some light on what went down.
A podcast to check out is On the Other Side with Chase Chapman. I love Chase’s focus on the human side of web3. This is a perfect conversation for this week.
That’s all for this week, folks.
Leave a comment to let me know your thoughts and what topics you would like me to explore in future editions.
Cheers!
Misha