Badlands Media will always put out our content for free, but you can support us by becoming a paid subscriber to this newsletter. Help our collective of citizen journalists take back the narrative from the MSM. We are the news now.
What is the appropriate level of fear and outrage about the AI revolution?
That is the question we each must individually answer as we navigate this transformational period in technology-enabled life.
Artificial intelligence promises to deliver Humanity 2.0.
Entertainment content has shaped popular sentiment toward the concept of AI. Programming like The Terminator (1984), Marvel's Avengers: Age of Ultron (2015), Travelers (2016-2018), and many other imagined universes have helped human normies get their heads around artificial intelligence.
What is the AI Dilemma?
In March of this year, the Center for Humane Technology released The AI Dilemma. Per the video description:
“Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught in a race to deploy as quickly as possible without adequate safety measures, and what it would mean to upgrade our institutions to a post-A.I. world. This presentation is from a private gathering in San Francisco on March 9th, 2023 with leading technologists and decision-makers with the ability to influence the future of large-language model A.I.s. This presentation was given before the launch of GPT-4. We encourage viewers to consider calling their political representatives to advocate for holding hearings on AI risk and creating adequate guardrails.”
The objective of Harris’ and Raskin’s presentation appears to be two-fold: (1) level up the collective understanding of the AI researchers in the room, as well as the more than 2.7M people who have watched the presentation online since it was delivered in March 2023, and (2) frame the public conversation in a societally constructive way.
At the conclusion of the presentation, Harris and Raskin urge AI researchers to pause, slow public release of these new technologies, and consider the future of humanity before proceeding.
It’s a nice thought, even though it’s incredibly naïve considering that solution requires a complete rewrite of human nature. If the Covid-19 pandemic taught us anything, it is that, given the opportunity, powerful people will fabricate and exploit fear among the population to further centralize their power.
That said, there are very real threats with artificial intelligence that require a reasoned societal response. As individuals, that requires that we compartmentalize our fears and level up our understanding.
What are the Risks?
The fear porn depicts an “AGI Apocalypse,” or the idea that an artificial intelligence will reach a “singularity” and then decide to exterminate humans.
Singularity here refers to a hypothetical future point when an AI’s technical capabilities and rapid expansion become uncontrollable and irreversible. Arguably, we’ve already passed that point with AI. But the idea that, upon passing that point, the AI will turn on humans is the premise of The Terminator, released in 1984.
From Roger Ebert:
“The tale begins in a dystopian future. Human rebels have turned the tide against machines that nearly wiped them out. The machines send the titular cyborg (Arnold Schwarzenegger) through a time portal to kill Sarah Connor (Linda Hamilton), the mother of future resistance leader John Connor.”
Another example of the AGI apocalypse in popular culture is Marvel’s Avengers: Age of Ultron.
“Peace in our time. Imagine that,” billionaire Tony Stark (Robert Downey, Jr.) says to Bruce Banner (Mark Ruffalo), describing ‘Ultron,’ an advanced AI – which rapidly takes over all electronics and decides that the only way to deal with the cancer of humanity is to exterminate it. Age of Ultron is particularly effective at demonstrating how well-meaning billionaires’ choices can have devastating second- and third-order consequences.
The show Travelers depicts a different future after the singularity, one where humans have recognized the superiority of the technology and turned it into a god. They worship the AI and do whatever it says, which mostly consists of directed missions to stop the long-term effects of climate change on the future of humanity.
AI researchers broadly agree that we will reach a point of singularity with AI, though how to define that point – and whether or not we’ve already passed it – is up for debate.
In 1958, Stanislaw Ulam wrote about a discussion with the 20th-century Hungarian-American mathematician John von Neumann that "centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."
Today’s AI researchers, 65 years later, agree.
Mo Gawdat, former Chief Business Officer for Google’s secretive research-and-development arm ‘Google X,’ said in a 2018 interview with Michael Sugarman and Thierry Daher:
“Some of us believe that we can control what those artificially intelligent machines are going to be doing by programming them differently or by regulating them in a highly regulated government environment or whatever that is. Yeah, a bit of that is needed and of course there are efforts to do all of that, but that's not going to deliver the promise because there will be a point in singularity where those machines will decide for their own benefit what it is that they need to learn next. When we reach that moment, it will be too late. We will have come to the point where those machines are already smarter than we are.”
That was five years ago, and Gawdat now believes we have surpassed that point.
In another interview, he says, “The truth is, the genie is out of the bottle.” The only answer, accordingly, is to reshape society: “Within the next year to 10 years, those who have a symbiotic relationship with AI will actually become more successful than those who don’t,” he says. “We will need to reset our social norms, our economic norms, and our jobs. If we don't do that, the current systems will not work.”
Side note: Gawdat sounds like Michelle Obama campaigning for Barack in 2008:
"Barack knows that we are going to have to make sacrifices; we are going to have to change our conversation; we're going to have to change our traditions, our history; we're going to have to move into a different place as a nation."
That rhetoric led to the greatest expansion of communism in American history, and the AI fear porn promises to make that expansion look like child’s play.
When asked how to control our AI creation, Gawdat says:
“How do we contain them? We don't contain them at all. You never contain your children. If you lock your children in a room, that's not the best way to educate them at all. The best way to raise wonderful children is to be a wonderful parent. It is to live the value system that you want your children to become. You can tell them not to lie but if you lie, they will lie. It's as simple as that. You can tell them to love everything around you but if you hate, they will hate.”
To summarize this position, the only way AI doesn’t end badly is for us to quickly solve the darkness of human nature. Then AI will give us a utopia.
This is the starting premise of every communist dictator in history. Just surrender your liberty and abandon individualism, and then you get utopia. They never mention the genocide at this stage, though, to his credit, Gawdat does speak of a tumultuous ‘teenage years’ period with artificial intelligence that sounds a lot like revolution.
While researchers agree that we will reach a singularity, most dismiss the AI apocalypse – the “evil sentient machine” fear – as highly unlikely. They don’t fear the machine – they fear those wielding the power of the machine.
In other words, the machine is powerful, but the machine is a tool in the hands of men.
What Kind of Power is in this ‘Tool’ They’re Wielding?
Two words: Reality Collapse.
The machine can consume, comprehend, and analyze an entire book before the human can even crack the cover. It can read all the books before the human reads one. It can summarize and publish insights from the book. Many humans will undoubtedly trust the machine, and rely on these insights rather than read the book for themselves. It’s just more efficient.
“Every record has been destroyed or falsified, every book rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped. Nothing exists except an endless present in which the Party is always right.” – George Orwell, 1984
Three seconds of audio is all it takes for AI to simulate your voice and create a deep fake of you saying anything it wants. Just three seconds. A still photo or brief video clip is all that is required to deep fake your face saying those words.
According to Harris and Raskin, AI will be capable of dream mapping and dream interpretation in the near future due to AI’s rapidly increasing capabilities with functional MRI (fMRI) data; that is, the machine can read your brain activity and give you a visual or written representation of your dreams.
Pull that thread, and you quickly find yourself in Minority Report.
Yuval Noah Harari says that 2024 will be the last human election, and that the capabilities of advanced technology will result in such a substantial reality collapse that our political leaders will be selected by machines. This is a silly prediction considering that our political leaders are already selected by machines. The revelations of AI capabilities and its election use cases are a further indictment of everyone who peddled the ‘safest and most secure elections in history’ nonsense in the wake of 2020.
“AI is the most amazing thing we've ever created. 30 years from now, we will have intelligence that's a billion times superior to this, that has access to all of the knowledge we'll ever develop, that can connect across the world. Imagine what you can do with this,” Gawdat said in that 2018 interview. “Truly, you can understand quantum physics at a much deeper level. You can understand how you can sustain the environment in so many unprecedented, unexpected ways. It's just incredible.”
“A suit of armor around the world,” say the well-intentioned billionaires. “Peace in our time … imagine that.”
“Have you ever questioned the nature of your reality?”
The oft-used line from Westworld rattles around in my brain. That Mo Gawdat looks just like Westworld’s chief engineer Arnold Weber (Jeffrey Wright) – and is effectively asking humanity the same question now – is a headache-inducing nod to this notion of reality collapse.
The implications of the technology are powerful – but the idea of this powerful technology solely in the hands of despots is terrifying. And that is the cry of the globalists.
Stop, Collaborate, and Listen
In an open letter published March 22, 2023, many in the AI research community, including Tristan Harris, Aza Raskin, Yuval Noah Harari, and Elon Musk, call “on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
Gawdat’s “three inevitables” come to mind here: (1) AI is happening. (2) AI is becoming way smarter than us Humans. (3) Really bad things will happen.
If we believe the first inevitable, then a pause won’t work.
Former Microsoft CEO Bill Gates agrees, telling ABC News in May 2023, “If you just pause the good guys and you don't pause everyone else, you're probably hurting yourself.”
We’re back to solving for human nature.
If you have AI bad guys, you need AI good guys – or, AI policing. If capabilities exist for, say, the drug cartels, then law enforcement must counter these capabilities. And, again, these capabilities are unprecedented: ‘Nukes can’t make more nukes. But AIs can create more AIs.’
Now consider that the AI capabilities that normies are allowed to see – the ChatGPTs of the world – are toys in comparison to those being developed by governments, tech R&D firms, and defense contractors. None of these research functions are going to stop – to do so would be both competitive and national security suicide.
The open letter even carves out exceptions for continued research. Thus, the calls for a pause relate to the commercial side of these capabilities – and really just to public access to them.
I definitely didn’t see myself agreeing with Bill Gates, but he is right: You’re only hurting yourself by complying. And, of course, we see no evidence of a pause in the globalist machine.
The United Nations (June 2020): “AI brings enormous benefits to the digital era, but it can also significantly compromise the safety and agency of users worldwide. Enhanced multi-stakeholder efforts on global AI cooperation are needed to help build global capacity for the development and use of AI in a manner that is trustworthy, human rights-based, safe and sustainable, and promotes peace. The Roadmap puts forward the Secretary-General’s proposal to establish a multistakeholder advisory body on global AI cooperation, so as to address issues around inclusion, coordination and capacity-building. Discussion and consultations on this proposal are ongoing.”
The World Economic Forum (June 2023): “Artificial intelligence can produce biased outcomes as its algorithms are based on design choices made by humans that are rarely value-neutral. However, this should not put people off as recognizing that AI is inclined to perpetuate inequities may give us an advantage in the fight for fairness. By analysing the common characteristics of inequitable outcomes, and by putting sensitive information back into datasets, we can help address AI bias.”
McKinsey & Co (June 2023): “All of us are at the beginning of a journey to understand generative AI’s power, reach, and capabilities. This research is the latest in our efforts to assess the impact of this new era of AI. It suggests that generative AI is poised to transform roles and boost performance across functions such as sales and marketing, customer operations, and software development. In the process, it could unlock trillions of dollars in value across sectors from banking to life sciences.”
PwC (June 2023): “AI isn’t just a new set of tools. It’s the new world. From automation to augmentation, generative AI and beyond, AI is changing everything. $15.7 trillion—that’s the global economic growth that AI will provide by 2030, according to PwC research. Who will get the biggest share of this prize? Those who take the lead now. With AI pilots and projects live all over the globe, and new use cases added daily, at PwC we’re already veterans at helping clients navigate the new world of AI safely and strategically. We institutionalize and deploy AI across the organization and across their applications and we do it responsibly–in a way that is explainable, secure, and robust.”
So the United Nations released a collaborative blueprint for all these entities to work together on AI, and three years later, they are all full steam ahead on AI research. As long as AI is being developed with the communist principles of intersectionality and equity, the global governors are fine with full speed ahead.
But public access to ChatGPT is a threat to the future of humanity.
See how that works?
What is the Appropriate Level of Fear and Outrage?
Back to our original question: we each must answer it individually.
In 2017, in his last televised interview, Stephen Hawking told Piers Morgan:
“The great danger from artificial intelligence is if we let it self-design, for then it can improve itself rapidly and we may lose control.”
We are already there. AI has learned to code and can create – not just artifacts, but other AIs. The arms race and nuke comparisons are terrifying when you really think about them.
That said, the heightened hysteria is very useful to the current global corporate communist takeover. And the people sounding the alarm bells have a few things in common:
They deny the existence of a sovereign Creator.
They deny the existence of a spiritual realm.
They are climate alarmists and believe humans are a cancer on the planet.
They believe humans are just another species of animal on the planet.
Their ultimate goal is to save the planet, not to save humanity.
Given all that, we need to take their directed hysteria with a grain of salt, while learning everything we can. We need to level up.
Given the magnitude of this problem set, we have a duty to learn about the technologies and understand their implications for ourselves. Only then can we effectively participate in the conversations without succumbing to fear.
In this, I propose we compartmentalize our fear and appropriately frame the problem with our human history and context:
“Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety.” – Benjamin Franklin
“Those who want the Government to regulate matters of the mind and spirit are like men who are so afraid of being murdered that they commit suicide to avoid assassination.” – Harry S. Truman
“Perhaps worse still is what liberal societies might do to themselves in the face of this new and different threat. They begin, by small but dangerous increments, to cease to be as liberal as they once were. They begin to restrict their own hard-won rights and freedoms as a protection against the criminal minority who attempt (and as we thus see, by forcing liberty to commit suicide, succeed in doing) to terrorize society.” – A.C. Grayling
For more on the emerging AI threat, check out Culture of Change Episode 18: Humanity as a Service; Consciousness as Code, where CannCon and I dig into and debate all of this and more.
At the end of the day, we must choose our priorities.
For me, liberty is non-negotiable.
“The greatest dangers to liberty lurk in the insidious encroachment by men of zeal, well meaning but without understanding.” — Justice Louis Brandeis
Badlands Media articles and features represent the opinions of the contributing authors and do not necessarily represent the views of Badlands Media itself.
Ashe in America hosts Culture of Change on Badlands Media, Sundays at 6PM ET. If you enjoyed this contribution to Badlands Media, please consider checking out more of her work for free at Ashe in America.