The Impact of AI on Labor and Society: Insights from Karen Hao
Overview
In this special report, journalist Karen Hao discusses her book 'Empire of AI' and the implications of artificial intelligence for the workforce and society. She argues for labor-assistive technologies over labor-automating ones, critiques the current trajectory of AI development, and emphasizes the importance of community involvement in AI governance.
Key Points
- AI's Impact on Jobs: Hao explains that AI is perceived as capable of replacing jobs, leading to layoffs, even though the technology itself isn't fully capable of doing so. She advocates for developing labor-assistive technologies instead of labor-automating ones.
- OpenAI's Mission: OpenAI was founded as a nonprofit with the mission to ensure that artificial general intelligence (AGI) benefits all of humanity. Hao critiques the organization's shift toward commercial interests and the implications of that shift, discussed further in the summary of OpenAI's Shift to Profit: A New Era of AI Governance and Innovation.
- The Role of Community: Hao emphasizes the importance of community involvement in AI development, citing the example of an indigenous group using AI to preserve its language and culture. This aligns with broader discussions of community engagement in technology, as explored in Exploring AI Implementation Challenges in Libraries: Insights from a Panel Discussion.
- Global AI Landscape: The discussion touches on the competitive dynamics between the US and China in AI research, highlighting how US export controls have inadvertently spurred innovation in China. This global perspective is explored further in The Impact of AI on Society: Opportunities and Challenges.
- Environmental Concerns: The environmental impact of AI infrastructure, including its energy and water consumption, is a significant concern, with Hao calling for more sustainable practices. Related sustainability questions are discussed in the context of robotics in The Future of Robotics: Innovations and Industry Insights.
- Democracy and AI: Hao warns that the current approach to AI development threatens democracy by undermining communities' agency and self-determination.
Conclusion
Karen Hao's insights provide a critical perspective on the future of AI and its societal implications, urging a more democratic and community-focused approach to technology development.
FAQs
- What is the main argument of Karen Hao's book 'Empire of AI'? The book argues that AI development should focus on labor-assistive technologies rather than labor-automating ones, emphasizing community involvement in the process.
- How does AI impact jobs, according to Karen Hao? AI is perceived as a threat to jobs, leading to layoffs, even though the technology isn't fully capable of replacing human workers.
- What is OpenAI's mission? OpenAI aims to ensure that artificial general intelligence benefits all of humanity, although its shift toward commercial interests has raised concerns.
- What role does community play in AI development? Community involvement is crucial for ensuring that AI technologies serve the needs and interests of the people they affect, as demonstrated by indigenous groups using AI for language preservation.
- What are the environmental concerns related to AI? The energy and water consumption of AI infrastructure poses significant environmental challenges that need to be addressed.
- How does the US-China dynamic affect AI research? The US has implemented export controls to maintain its lead in AI, but these measures have spurred innovation in China, demonstrating the complexity of global competition in AI.
- What does Karen Hao mean by AI threatening democracy? Hao argues that the current trajectory of AI development undermines communities' agency and self-determination, which are essential for a functioning democracy.
Transcript
AMY GOODMAN: This is Democracy Now!, democracynow.org, the War and Peace Report. I'm Amy Goodman. In this holiday special, we continue with the journalist Karen Hao, author of the new book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. She came into our studio in May. She talked about how AI will impact workers.

KAREN HAO: One of the things that we have seen is that this technology is already having a huge impact on jobs, not necessarily because the technology itself is really capable of replacing jobs, but because it is perceived as capable enough that executives are laying off workers. And we need some kind of guardrails to actually prevent these companies from continuing to try and develop labor-automating technologies, and to shift them to producing labor-assistive technologies.
AMY GOODMAN: What do you mean?

KAREN HAO: So, OpenAI, their definition of what they call artificial general intelligence is highly autonomous systems that outperform humans at most economically valuable work. So they explicitly state that they are trying to automate jobs away. I mean, what is economically valuable work but the things that people do to get paid? But there's this really great book called Power and Progress by the MIT economists Daron Acemoglu and Simon Johnson, who note that technology revolutions take a labor-automating approach not because of inevitability, but because the people at the top choose to automate those jobs away. They choose to design the technology so that they can sell it to executives and say: you can shrink your costs by laying off all these workers and using our AI services instead. But in the past, we've seen studies that suggest, for example, that if you develop an AI tool that a doctor uses, rather than replacing the doctor, you will actually get better healthcare for patients; you will get better cancer diagnoses. If you develop an AI tool that teachers can use, rather than just an AI tutor that replaces the teacher, your kids will get better educational outcomes. And so that's what I mean by labor-assistive rather than labor-automating.

AMY GOODMAN: And explain what you mean, because I think a lot of people don't even understand artificial intelligence. And when you say "replace the doctor," what are you talking about?

KAREN HAO: Right. So these companies, they try to develop a technology that they position as an everything machine that can do anything. And so they will try to say: you can use this; you can talk to ChatGPT for therapy. No, you cannot. ChatGPT is not a licensed therapist. And in fact, these models actually spew lots of medical misinformation. And there have been lots of examples of users being psychologically harmed by the model, because the model will continue to reinforce self-harming behaviors. And we've even had cases where children who speak to chatbots, and develop huge emotional relationships with these chatbots, have actually killed themselves after using these chatbot systems. But that's what I mean when these companies are trying to develop labor-automating tools: they're positioning it as, you can now hire this tool instead of hiring a worker.
AMY GOODMAN: So, you've talked about Sam Altman, and in part one we touched on who he is, but I'd like you to go more deeply into who Sam Altman is, how he exploded onto the US scene, testifying before Congress, actually warning about the dangers of AI. That really protected him in a way, people seeing him as a prophet, p-r-o-p-h-e-t. But now we can talk about the other kind of profit, p-r-o-f-i-t, and how OpenAI was formed. How is OpenAI different from other AI companies?

KAREN HAO: OpenAI, I mean, it was originally founded as a nonprofit, as I mentioned. And Altman specifically, when he was thinking about how to build a fundamental AI research lab that would make a big splash, chose to make it a nonprofit, because he identified that he could not compete on capital (he was relatively late to the game, and Google already had a monopoly on a lot of top AI research talent at the time), and he could not compete in terms of being a first mover. So he needed some other ingredient to recruit talent, recruit public goodwill, and establish a name for OpenAI. He identified a mission: let me make this a nonprofit, and let me give it a really compelling mission. The mission of OpenAI is to ensure artificial general intelligence benefits all of humanity.

And one of the quotes that I open my book with is a quote that Sam Altman cited himself in 2013 on his blog. He was an avid blogger back in the day, writing about his learnings on business and strategy and Silicon Valley startup life. The quote is: "Successful people build companies. More successful people build countries. The most successful people build religions." And then he reflects on that quote in his blog, saying, "It appears to me that the best way to build a religion is actually to build a company."
AMY GOODMAN: And so talk about how Altman was then forced out of the company and then came back. And also, I just found it so fascinating that you were able to speak with so many OpenAI workers. You thought there was a kind of total ban on you.

KAREN HAO: Yes. Yeah, exactly. So I was the first journalist to profile OpenAI. I embedded within the company for three days in 2019, and then my profile published in 2020 in MIT Technology Review. And at the time, I identified in the profile this tension that I was seeing, where it was a nonprofit by name, but behind the scenes a lot of the public values that they espoused were actually the opposite of how they operated. They espoused transparency, but they were highly secretive. They espoused collaborativeness; they were highly competitive. And they espoused that they had no commercial intent, but in fact they had just gotten a $1 billion investment from Microsoft, and it seemed like they were rapidly going to develop commercial intent. And so I wrote that into the profile, and OpenAI was deeply unhappy about it, and they refused to talk to me for three years.

And so, when OpenAI took up this mission of artificial general intelligence, they were able to essentially shape and mold what they wanted this technology to be based on what is most convenient for them. But when they identified it, it was at a time when scientists really looked down on the term AGI. And so they absorbed just a small group of self-identified AGI believers. This is why I call it quasi-religious: there's no scientific evidence that we can actually develop AGI. The people who have this strong conviction that they will do it, and that it's going to happen soon, base that purely on belief, and they talk about it as a belief, too. But there are two factions within this belief system of the AGI religion. There are people who think AGI is going to bring us to utopia, and there are people who think AGI is going to destroy all of humanity. Both of them believe that it is possible and that it's coming soon, and therefore they conclude that they need to be the ones to control the technology and not democratize it. And this is ultimately what leads to your question of what happened when Sam Altman was fired and rehired. Through the history of OpenAI, there's been a lot of clashing between the boomers and doomers about who should actually...

AMY GOODMAN: The boomers and doomers?

KAREN HAO: The boomers and the doomers: those who say it'll bring us to utopia, the boomers, and those who say it'll destroy humanity, the doomers. And they have clashed relentlessly and aggressively about how quickly to build the technology, how quickly to release the technology.
AMY GOODMAN: And I want to take this up until today, to, in January, the Trump administration announcing the Stargate project, a $500 billion project to boost AI infrastructure in the United States. This is OpenAI's Sam Altman, speaking alongside President Trump.

SAM ALTMAN: I think this will be the most important project of this era. And as Masa said, for AGI to get built here, to create hundreds of thousands of jobs, to create a new industry centered here, we wouldn't be able to do this without you, Mr. President.

AMY GOODMAN: He also there referred to AGI, artificial general intelligence. Explain what happened here and what this is, and has it actually happened?

KAREN HAO: So, Altman, before Trump was elected, was already sensing through observation that it was possible the administration would shift, and that he would need to start politicking quite heavily to ingratiate himself with a new administration. Altman is very strategic. He was under a lot of pressure at the time as well, because his original co-founder Elon Musk now has great beef with him. Musk feels like Altman used his name and his money to set up OpenAI, and then he got nothing in return. So Musk had been suing him, is still suing him, and suddenly became first buddy of the Trump administration. So Altman basically, cleverly, orchestrated this announcement, which, by the way, is quite strange, because it's not the US government giving $500 billion. It's private investment coming into the US from places like SoftBank, which is one of the largest investment funds, run by Masayoshi Son, a Japanese businessman who made a lot of his wealth in the previous tech era. So it's not even the US government that's providing this money.
AMY GOODMAN: And take that right through to now, that Gulf trip that Elon Musk was on, but so was Sam Altman, to the fury of Elon Musk, and then a deal was sealed in Abu Dhabi. It didn't include Elon Musk but was about OpenAI.

KAREN HAO: Yeah, exactly. So Altman has continued to try and use the US government as a way to get access to more places and more powerful spaces to build out this empire. And one reason is that OpenAI's computational infrastructure needs are so aggressive. I had an OpenAI employee tell me, "We're running out of land and power." So they are running out of resources in the US, which is why they're trying to get access to land and energy in other places. The Middle East has a lot of land and a lot of energy, and they're willing to strike deals. And that is why Altman was part of that trip, looking to strike a deal. And the deal that they struck was to build a massive data center, or multiple data centers, in the Middle East, using their land and their energy. But one of the things that OpenAI has recently rolled out, they call it the OpenAI for Countries program, is this idea that they want to install OpenAI hardware and software in places around the world. And it explicitly says: we want to build democratic AI rails; we want to install our hardware and software as a foundation of democratic AI globally, so that we can stop China from installing authoritarian AI globally. But the thing that he does not acknowledge is that there is nothing democratic about what he's doing. You know, the Atlantic's executive editor says we need to call these companies what they are: they are techno-authoritarians. They do not ask the public for any perspective on how they develop the technology, what data they train the technology on, where they develop these data centers. In fact, these data centers are often developed under the cover of night, under shell companies. Meta recently entered New Mexico under a shell company named Greater Kudu LLC.

AMY GOODMAN: Greater Kudu?

KAREN HAO: Greater Kudu LLC. And once the deal was actually closed and the residents couldn't do anything about it anymore, that's when it was revealed: surprise, we're Meta, and you're going to get a data center that drinks all of your fresh water.

AMY GOODMAN: And then there was this whole controversy in Memphis around a data center.
KAREN HAO: Yes. That is the data center that Elon Musk is building. So meanwhile, Musk is saying: Altman is terrible; everyone should use my AI. And of course, his AI is also being developed with the same environmental and public health costs. He built this massive supercomputer called Colossus in Memphis, Tennessee, that's training Grok, the chatbot that people can access through X. And that is being powered by around 35 unlicensed methane gas turbines that are pumping thousands of tons of toxic air pollutants into the greater Memphis community. And that community has long suffered a lack of access to clean air, a fundamental human right.
AMY GOODMAN: So I want to go to, interestingly, Sam Altman testifying before Congress about solutions to the high energy consumption of artificial intelligence.

SAM ALTMAN: In the short term, I think this probably looks like more natural gas, although there are some applications where I think solar can really help. In the medium term, I hope it's advanced nuclear fission and fusion. More energy is important well beyond AI.

AMY GOODMAN: So that's OpenAI's Sam Altman, testifying before the Senate and talking about everything from solar to nuclear power, something that was fought in the United States by environmental activists for decades. So you have these huge old nuclear power plants, but many say you can't make them safe, no matter how small and smart you make them.
KAREN HAO: This is one of the things, of the many things, that I'm concerned about with the current trajectory of AI development. This is a second-order, third-order effect: because these companies are trying to claim that the AI development approach they took doesn't have climate harms, they are explicitly evoking nuclear, again and again and again, as if nuclear will solve the problem. And it has been effective. I have talked with certain AI researchers who thought the problem was solved because of nuclear. And in order to try and actually build more and more nuclear plants, they are lobbying governments to try and unwind the regulatory structure around nuclear power plant construction. I mean, this is crazy on so many levels: they're not just trying to develop the AI technology recklessly, they are also trying to lay down infrastructure, nuclear infrastructure, in this move-fast-break-things ideology.

AMY GOODMAN: But for those who are environmentalists and have long opposed nuclear, will they be sucked in by the solar alternative?
KAREN HAO: But that's exactly it. Data centers have to run 24/7, so they cannot actually run on just renewables. That is why the companies keep trying to evoke nuclear as the solution. Solar does not actually work when we do not have sufficient energy storage solutions for that 24/7 operation.
AMY GOODMAN: We're talking to Karen Hao, author of Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. You mentioned earlier China. You live in Hong Kong. You've covered Chinese AI and US AI for years. Explain what's happening in China right now.

KAREN HAO: Yeah. So I have to sort of explain the dynamic between China and the US first. China and the US are the largest hubs for AI research; they have the largest concentrations of AI research talent globally. Other than Silicon Valley, China really is the only other rival in terms of talent density and the amount of capital investment and infrastructure going into AI development. In the last few years, what we have seen is the US government aggressively trying to stay number one, and one of the mechanisms it has used is export controls. A key input into these AI models is the computational infrastructure, the computer chips installed into the data centers for training these models. In order to develop the AI models, companies are using the most bleeding-edge computer chip technology; every two years a new chip comes out, and they immediately start using it to train the next generation of AI models. Those computer chips are designed by American companies, the most prominent one being Nvidia in California. And so the US government has been trying to use export controls to prevent Chinese companies from getting access to the most cutting-edge computer chips. That has all been under the recommendation of Silicon Valley, saying: this is the way to prevent China from being number one; put export controls on them, and don't regulate us at all, so we can stay number one and they will fall behind. What has happened instead is that, because there is a strong base of AI research talent in China, under the constraints of fewer computational resources Chinese companies have actually been able to innovate and develop the same level of AI model capabilities as American companies with two orders of magnitude less computational resources, less energy, less data. I'm talking specifically about the Chinese company High-Flyer, which developed this model called DeepSeek earlier this year that briefly tanked the global economy, because the company said that training this one AI model cost around $6 million, when OpenAI was training models that cost hundreds of millions, if not tens of billions, of dollars. And that delta demonstrated to people that what Silicon Valley has tried to convince everyone of for the last few years, that this is the only path to getting more AI capabilities, is totally false. And actually, the techniques the Chinese company was using were ones that existed in the literature and just had to be assembled. They used a lot of engineering sophistication to do that, but they weren't actually using fundamentally new techniques. They were ones that already existed.
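As a quick back-of-envelope check on the gap Hao describes: the roughly $6 million DeepSeek training figure is quoted above, while the frontier-run figure below is an illustrative stand-in for "hundreds of millions of dollars."

```python
import math

deepseek_run_cost = 6e6    # ~$6 million, as quoted in the interview
frontier_run_cost = 6e8    # assumed $600 million, illustrative only

ratio = frontier_run_cost / deepseek_run_cost
print(f"{ratio:.0f}x cheaper, ~{math.log10(ratio):.0f} orders of magnitude")
# -> 100x cheaper, ~2 orders of magnitude
```

The arithmetic only sizes the reported cost gap; it says nothing about how the efficiency was achieved, which, as Hao notes, came from assembling techniques already in the literature.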
AMY GOODMAN: So let me ask you something, Karen. The latest news, as you're traveling in the United States before you go back to Hong Kong, of Trump's attack on academia, how does this fit in? How could Trump's attack on international students, specifically targeting the, what, more than 250,000, a quarter of a million, Chinese students and revoking their visas, impact the future of the AI industry? And not just Chinese students, because what's going on here now is terrifying students around the world, and, because labs are shutting down in all kinds of ways here, US students as well, who are deciding to go abroad.
KAREN HAO: This is just the latest action that the US government has taken over the last few years to really alienate a key talent pool for US innovation. Originally, there were more Chinese researchers working in the US and contributing to US AI than there were in China, because just a few years ago Chinese researchers aspired to work for American companies. They wanted to move to the US. They wanted to contribute to the US economy. They didn't want to go back to their home country. But then came what was called the China Initiative, a first-Trump-era initiative that tried to criminalize Chinese academics, or ethnically Chinese academics, some of whom were actually Americans, based on just paperwork errors; they would accuse them of being spies. That was one of the first actions. Then, of course, the pandemic happened, and the US-China trade escalations started amplifying anti-Chinese rhetoric. All of these, and now the potential ban on international students, have led more and more Chinese researchers to just opt for staying at home and contributing to the Chinese AI ecosystem. And this was a prerequisite to High-Flyer pulling off DeepSeek. If there had not been that concentration and buildup of AI talent in China, they probably would have had a much harder time innovating around, circumventing, the export controls that the US government was imposing on them. But because they now have a high concentration of top talent, some of the top talent globally, when those restrictions were imposed, they were able to innovate around them. So DeepSeek is literally a product of this continuation of that alienation, and with the US continuing to take this stance, it is just going to get worse. And as you mentioned, it's not just Chinese researchers. I literally just talked to a friend in academia who said she's considering going to Europe now, because she just cannot survive without that public funding. And European countries are seeing a critical opportunity, offering million-dollar packages: come here, we'll give you a lab, we'll give you millions of dollars of funding. I mean, this is the fastest way to brain drain this country.
AMY GOODMAN: I mean, what many are saying: the US's brain drain is their brain gain.

KAREN HAO: Yes.

AMY GOODMAN: And this also reminds us of history. You have the Chinese rocket scientist Qian Xuesen, who in the 1950s was inexplicably held under house arrest for years, and then Eisenhower has him deported to China. He becomes the father of rocket science and China's entry into space. And he said he would never again step foot in the United States, even though originally that was the only place he wanted to live.

KAREN HAO: Yes. And there was, I believe, a government official, a US government official, who said that was the dumbest mistake the US ever made.
AMY GOODMAN: We talk about the brain drain and the brain gain. Okay, again, some more rhyming: the doomers and the boomers. I want to talk about what an AI apocalypse looks like, meaning how it brings us to apocalypse, but also how people say it could lead us to a utopia. What are the two trajectories?
KAREN HAO: It's a great question, and I ask boomers and doomers this all the time: can you articulate to me exactly how we get there? And the issue is that they cannot. And this is why I call it quasi-religious. It really is based on belief. I mean, I was talking with one researcher who identified as a boomer, and his eyes were wide and he really lit up, saying, "You know, once we get to AGI, game over, everything becomes perfect." And I asked him, "Can you explain to me how AGI feeds people that don't have food on the table right now?" And he was like, "Oh, you're talking about, like, the floor, and how to elevate their quality of life." And I was like, "Yes, because they are also part of all of humanity." And he was like, "I'm not really sure how that would happen, but I think it could help the middle class get more economic opportunity." And I was like, "Okay, but how does that happen, as well?" And he was like, "Well, once we have AGI and it can just create trillions of dollars of economic value, we can just give them cash payouts." And I was like, "Who's giving them cash payouts? What institutions are giving them?" You know, when you actually test their logic, it doesn't really hold. And with the doomers, I mean, it's the same thing. What I realized when reporting the book is that they believe AGI is possible because of their belief about how the human brain works. They believe human intelligence is inherently fully computational, so if you have enough data and enough computational resources, you will inevitably be able to recreate human intelligence; it's just a matter of time. And to them, the reason that would lead to an apocalyptic scenario is that humans learn and improve our intelligence through communication, and communication is inefficient; we miscommunicate all the time. AI intelligences, by contrast, would be able to rapidly get smarter and smarter by having perfect communication with one another as digital intelligences. And so many of these people who self-identify as doomers say there has never been, in the history of the universe, a case of a species that was able to rule over a more superior species. So they think that ultimately AI will evolve into a higher species, and then start ruling us, and then maybe decide to get rid of us altogether.

AMY GOODMAN: As we begin to wrap up, I'm wondering if you can talk about any model of a country, not a company, that is pioneering a way of democratically controlled artificial intelligence.
KAREN HAO: I don't think it's actively happening right now. The EU has the EU AI Act, which is their major piece of legislation trying to develop a risk-based, rights-based framework for governing AI deployment. But to me, one of the keys of democratic AI governance is also democratically developing AI, and I don't think any country is really doing that. And what I mean by that is: AI has a supply chain. It needs data. It needs land. It needs energy. It needs water. And it also needs spaces that these companies need access to in order to deploy their technology: schools, hospitals, government agencies. Silicon Valley has done a really good job over the last decade of making people feel that their collectively owned resources are Silicon Valley's. You know, I talk with friends all the time who say, "We don't have data privacy anymore, so what's more data to these companies? I'm fine just giving them all of my data." But that data is yours. That intellectual property is the writers' and artists' intellectual property. That land is a community's land. Those schools are the students' and teachers' schools. The hospitals are the doctors' and nurses' and patients' hospitals. These are all sites of democratic contestation in the development and the deployment of AI. And just like those Chilean water activists that we talked about, who aggressively understood that that fresh water was theirs, and were not willing to give it up unless they got some kind of mutually beneficial agreement for it, we need to have that spirit in protecting our data, our land, our water, and our schools, so that companies will inevitably have to adjust their approach, because they will no longer get access to the resources they need or the spaces they need to deploy in.
AMY GOODMAN: In 2022, Karen, you wrote a piece for MIT Technology Review headlined "A new vision of artificial intelligence for the people." In a remote rural town in New Zealand, an indigenous couple is challenging what AI could be and who it should serve. Who are they?

KAREN HAO: This was a wonderful story that I did, where the couple run Te Hiku Media. It's a nonprofit Māori radio station in New Zealand. And the Māori people have suffered a lot of the same challenges as many indigenous peoples around the world: the history of colonization led them to rapidly lose their language, and there are very few Māori speakers in the world anymore. And so, in the last few years, there's been an attempt to revive the language, and the New Zealand government has tried to repent by encouraging the revival of that language. But this nonprofit radio station had all of this wonderful archival material, archival audio of their ancestors speaking the Māori language, that they wanted to provide to Māori speakers and Māori learners around the world as an educational resource. The problem is, in order to do that, they needed to transcribe the audio so that Māori learners could actually listen, see what was being said, click on the words, understand the translation, and actually turn it into an active learning tool. But there were so few Māori speakers who could speak at that advanced level that they realized they had to turn to AI. And this is a key part of my book's argument: I'm not critiquing all AI development. I'm specifically critiquing the scale-at-all-costs approach that Silicon Valley has taken. There are many different kinds of beneficial AI models, including what they ended up doing. So they took a fundamentally different approach. First and foremost, they asked their community: do we want this AI tool? Once the community said yes, they moved to the next step of asking people to fully consent to donating data for the training of this tool. They explained to the community what the data was for, how it would be used, and how they would then guard that data and make sure it wasn't used for other purposes. They collected around a couple hundred hours of audio data in just a few days, because the community rallied support around the project. And only a couple hundred hours was enough to create a performant speech recognition model, which is crazy when you think about the scales of data that these Silicon Valley companies require. And that is, once again, a lesson that can be learned: there's plenty of research showing that with highly curated, small data sets, you can actually create very powerful AI models. And then, once they had that tool, they were able to do exactly what they wanted: to open-source this educational resource to their community.
And so my vision for AI development in the future is to have more small, task-specific AI models that are trained not on vast, polluted data sets but on small, curated ones, and that therefore need only small amounts of computational power and can be deployed on challenges we actually need to tackle for humanity: mitigating climate change by integrating more renewable energy into the grid, or improving healthcare by doing more drug discovery.
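As a concrete sketch of the consent-first data practice described above, the snippet below gates training data on a per-donation consent record. All names and fields are hypothetical; the interview does not describe Te Hiku Media's actual tooling.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    """One donated audio clip and the single purpose its donor consented to."""
    path: str
    transcript: str
    consented_purpose: str  # e.g. "language-revitalization-asr" (hypothetical label)

def training_set(clips: list[Clip], purpose: str) -> list[Clip]:
    """Keep only clips whose donors consented to exactly this purpose,
    so donated data cannot silently leak into other uses."""
    return [c for c in clips if c.consented_purpose == purpose]

corpus = [
    Clip("tape_001.wav", "...", "language-revitalization-asr"),
    Clip("tape_002.wav", "...", "archival-only"),
]
usable = training_set(corpus, "language-revitalization-asr")
print(len(usable))  # 1: the archival-only clip is excluded from training
```

The design point is that consent is recorded per donation, scoped to one purpose, and enforced at the pipeline boundary rather than assumed after the fact.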
AMY GOODMAN: So, as we finally do wrap up: you've been doing this journalism, this research, for years. What were you most shocked by in writing Empire of AI?

KAREN HAO: I originally thought that I was going to write a book focused on the vertical harms of the AI supply chain: here's how labor exploitation happens in the AI industry; here's how the environmental harms arise out of the AI industry. And at the end of my reporting, I realized that there's a horizontal harm happening here. Every single community that I spoke to, whether it was artists having their intellectual property taken or Chilean water activists having their fresh water taken, they all said that when they encountered the empire, they initially felt exactly the same way: a complete loss of agency to self-determine their future. And that is when I realized the horizontal harm here is that AI is threatening democracy. If the majority of the world is going to feel this loss of agency over self-determining their future, democracy cannot survive. And again, specifically Silicon Valley's approach: scale-at-all-costs AI development.
AMY GOODMAN: But you also chronicle the resistance. You talk about how the Chilean water activists felt at first, how the artists feel at first. So talk about the strategies that these people have employed, and whether they've been effective.

KAREN HAO: So the amazing thing is that there has since been so much pushback. The artists have said, "Wait a minute, we can sue these companies." The Chilean water activists said, "Wait a minute, we can fight back and protect these water resources." The Kenyan workers that I spoke to, who were contracted by OpenAI, said, "We can unionize and escalate our story to international media attention." And so, even when I thought that these communities, which you could argue are the most vulnerable in the world and have the least amount of agency, they were the ones that remembered that they do have agency, and that they can seize that agency and fight back. And it was remarkably heartening to encounter those people, to be reminded that actually the first step to reclaiming democracy is remembering that no one can take your agency away.
AMY GOODMAN: Karen Hao, author of the new book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Go to democracynow.org to see the full interview. And that does it for this special broadcast. I'm Amy Goodman. Thanks so much for joining us. Thanks for watching Democracy Now! on YouTube. Subscribe to the channel and turn on notifications to make sure you never miss a video. And for more of our audience-supported journalism, go to democracynow.org, where you can download our news app, sign up for our newsletter, subscribe to the daily podcast, and so much more.
Related Summaries

The Impact of AI on Society: Opportunities and Challenges
This video features a discussion among experts on the transformative effects of AI on employment, creativity, and societal structures. They explore both the potential benefits and the risks associated with AI, including job displacement, ethical concerns, and the future of human agency.

Exploring AI Implementation Challenges in Libraries: Insights from a Panel Discussion
This panel discussion delves into the challenges faced by libraries in implementing AI initiatives, highlighting experiences from various institutions. Panelists share insights on funding, governance, and the impact of AI on library staff roles, while emphasizing the importance of collaboration and ethical considerations in AI adoption.

OpenAI's Shift to Profit: A New Era of AI Governance and Innovation
Exploring OpenAI's transition from nonprofit to for-profit structure and its implications for the future of AI.

The Godfather of AI: Geoffrey Hinton on Career Prospects and AI Risks
In this insightful conversation, Geoffrey Hinton, known as the 'Godfather of AI', discusses the future of artificial intelligence, its potential dangers, and the implications for career prospects in a world increasingly dominated by superintelligence. He emphasizes the importance of AI safety and the need for regulations to mitigate risks.

The Revolutionary Impact of Claude AI: A Game-Changer for Software Engineering
Explore how Claude AI surpasses GPT-4, with revolutionary features that redefine productivity.