Overview of AI Safety Challenges
Maxime Fournes emphasizes that the situation around advanced AI is not simply a race between labs or countries, but a critical contest between those building potentially dangerous AI systems and those trying to prevent catastrophic outcomes. He stresses the urgency of putting global coordination among AI safety advocates on an exponential path.
Background and Path to PauseAI Leadership
- Maxim's decade-long AI engineering and research career, including work at a systematic hedge fund.
- Realization around 2022 of AI's accelerating capabilities after AlphaGo and GPT releases.
- Decision to join PauseAI as a volunteer in late 2023, driven by the need for public awareness and action.
Challenges in France
- Difficulty engaging with French AI experts and policymakers, many of whom rely on outdated AI models or downplay risks.
- Opposition from a government eager to compete in AI development.
- Despite obstacles, PauseAI France organized multiple protests, secured media coverage, published a 100-page critique of government AI policies, and held a high-profile Senate conference.
- Growth of a structured community with 2,000 supporters and over 100 paying members.
The Power of Protests and Public Engagement
- Protests serve as effective, low-cost tools to raise awareness and media interest.
- Maxime’s personal experience organizing France’s first PauseAI protest, which drew minimal participants yet significant press coverage.
PauseAI’s Impact and Future Directions
- Success in uniting a knowledgeable community and raising widespread awareness.
- Need to evolve from a loosely coordinated activist group to a professional, scalable organization with clear governance.
- Focus on building a well-oiled machine that can orchestrate exponential growth in advocacy and action globally.
Why AI Risk Mitigation Can Be More Manageable Than Climate Change
- AI development is concentrated among a relatively small number of labs, investors, and policymakers, making targeted pressure more feasible.
- Unlike climate change, which requires behavioral change from billions, AI safety hinges on influencing a limited group with concentrated power.
Upcoming Initiatives and Calls to Action
- Organizing PauseCon, PauseAI’s conference in Brussels, featuring prominent speakers like Stuart Russell, with engagement of the European Parliament.
- Participation in the AI Summit in India to address the global distribution of AI risks.
- Launching a Christmas fundraising campaign to hire key staff and empower volunteers through microgrants.
- Invitation for public involvement: joining, volunteering, and donating.
Conclusion
Maxime Fournes invites everyone aware of AI risks to take responsibility and contribute to a global movement that can change the trajectory of AI development. His leadership marks a commitment to transforming PauseAI into a globally influential force advocating for safe, regulated AI advancement.
>> A lot of people think of the situation as a race between different labs, or a race between the US and China. That's not reality. The reality is that there is a race between the people who are trying to build a godlike entity that will most likely kill everyone, and us, the people who are trying to coordinate to stop them. So what we need to do is get this coordination onto an exponential path as well.
>> So, exciting news this month at PauseAI: we are announcing a new director. Many of you will already know him as Maxime Fournes, who has been the director of PauseAI France for a while now. I'm personally quite excited to see how things will change and how we can make a stronger case for the need for AI regulation going forward. Thanks for joining us today, Maxime. How's it going?
>> Thank you, it's going well. I'm not sure I would say that I'm excited to take this role, because "excited" is a tricky word when we are talking about the stakes that we're facing. But I would say that I feel very aligned.
I feel the weight of the responsibility, and it's heavy, but there is a specific kind of energy that comes when you know that you're doing exactly what you're supposed to be doing. I don't do this because it's fun; I do it because it's necessary. But when I look at the team that we're building and the momentum that we have, I feel determined, I would say, and ready.
>> In the last couple of weeks, we've seen Google release their new model, Gemini 3. This came after PauseAI's protest back in the summer, held because Google hadn't lived up to some of the safety commitments they had previously made. We've also seen some quite drastic improvements in Gemini 3's capabilities. What was your take on its release?
>> Well, first I would say it's positive that Google decided to release the safety card after all our demands. On the other hand, this looks like the most unsafe model that has ever been released. So what we have gained in transparency, we are losing in the amount of risk that is being taken. I'm also not sure whether it's going to be the nail in the coffin for the people who believe that AI is not improving and is going to plateau.
What I would say is: I hope it's going to stop improving. I hope it will plateau soon. But really, we should prepare for the case where it doesn't. It doesn't change our actions much either way, because if it actually plateaus, that's amazing; it gives us more time. But we're still late. If we get AGI in 5 years, that's tricky. If we get it in 20 years, that's better, but we still need to prepare right now even in that case.
>> I believe you've been head of PauseAI France for almost two years now — I may be wrong about that. But obviously, before then you also existed and were doing things in the world. So give us a little bit of the story of Maxime. Before PauseAI, what were you up to?
>> Before PauseAI, I was an engineer and researcher in artificial intelligence. I've been doing that for about 10 years, although I wouldn't have called it artificial intelligence for most of my career. I think I started calling it that around 2020, when it became a bit clearer to me that we were getting closer to the real deal. I was heading an AI research and development team at a systematic hedge fund. I think I started understanding what was happening better around 2022, when I really started feeling the acceleration. That's when I started coming to my senses and thinking about what we were doing. It's not the same thing if you're working on a technology that you think will arrive in 200 years versus one you think is going to arrive in 10 years. I took a sabbatical to think about it, and then GPT-4 came out. That's when a lot of people started waking up to the reality, and that's when I realized the incredible amount of risk that we were taking. Shortly after that, I decided to join PauseAI as a volunteer.
>> Just out of curiosity, what were your timelines before the release of GPT-2 and GPT-3? The timelines shortened for almost everyone, but how far in the future were you looking when it came to things like AGI or superintelligence?
>> The key moment for me was when Lee Sedol, one of the best Go players in the world, was beaten by AlphaGo. At that point I started thinking, huh, maybe we will eventually create an AGI. Before that, it wasn't entirely clear to me whether there would be some hurdle that would make it impossible forever, which might sound a bit naive now. So at that point I thought, okay, maybe 100 years from now we'll be there. Then when GPT-2 came out, and people started realizing it had emergent skills — it could build websites from a prompt, things like this — I started thinking, oh, maybe I'm actually going to see this in my lifetime. So I thought maybe 50 years. But I wasn't thinking about it too much; it was really low among the priorities in my life, just at the back of my head. It's only when GPT-4 came out that I had a very sudden realization that we might be 5 to 10 years from AGI.
>> And after the huge boom in AI capabilities we saw in 2022 and 2023, more experts began to come out of the closet, as it were, and to warn very seriously about the risks of advanced AI. Then early in 2023, PauseAI was founded. How long after the founding did you join, and what were your reasons? Why did you think PauseAI was an organization that was needed at the time?
>> If I remember correctly, I joined in November of that year. I joined because I tried to figure out how I could have the most impact. I thought the situation was pretty bad, but I've always been very optimistic. I think that believing we are doomed and there's nothing to do is mostly an excuse, because it's very handy: if you think that, you have no responsibility on your shoulders. You can just chill, watch events unfold, be very cynical, and say "I told you so." I'm more of an optimist myself; I think we can change the course. I noticed that a lot of people were already trying to change the trajectory, but they were going through the inside game: building think tanks, trying to speak directly to politicians, trying to speak to people in the labs. Which is great — I think a lot of people should do this — but I thought what was really missing was public awareness. So I was looking for a movement that would try to achieve this, and PauseAI was there. I think it's founded on the correct idea: you cannot put this technology on pause if you don't have fairly global awareness of the risks in the general population. That's why I joined this movement.
>> Obviously, perhaps the main thing PauseAI is known for is our protests. For me personally, I had never attended a protest before joining PauseAI, because I had never felt strongly enough about something to feel it was apt to go to one. Had you been to a protest previously? And why are protests specifically something you think may be needed to raise public awareness, and perhaps show politicians that this is something the public cares about?
>> So first, I had never joined a protest in my life. The first protest I joined was actually one I organized in France, when I started the French branch of PauseAI. So that was quite a steep learning curve for me. But why do I think they are necessary? I think protests are a great way to raise attention on an issue. I tend to be more inclined towards very peaceful protests — I think they carry more strength than the more disruptive ones, although I'm still open to other ideas in that domain. But basically, they have a huge value-to-cost ratio. At the first protest I organized in France, for example, we were only 10 people in the street for one afternoon, and thanks to that action we managed to get articles published in multiple high-profile French newspapers. So that's really not a lot of work to be heard.
>> And you mentioned that that protest was one of your first actions in PauseAI.
You've since been the head of PauseAI France, one of our largest national chapters. What has your experience been like, and what were some of the unique challenges in France? You've told me before that some of the prominent figures in AI in France have been a bit more skeptical than those in the UK and America — figures like Yann LeCun. How have you dealt with those challenges?
>> Yeah. So France was, let's call it, hard mode. You have to understand that, as you said, France is the home of Yann LeCun. It's the home of Mistral, which is probably the most misaligned AI company in the world — although OpenAI is fighting hard for the top spot at the moment. There is also a big problem with the understanding of AI, because there's not much AI talent in France; everyone has left for San Francisco or London. So at home, the people advising the government mostly have no idea what they are talking about — mostly older people used to old-school AI, which has nothing to do with deep neural networks, who still feel like it's the same thing: just a series of instructions that a machine follows. That's a massive problem, because thanks to these people the government has no clue what's happening — or at least had no clue when I was starting. Part of the hostility is also that the French government wants to push super hard on AI; it thinks France should join the race and become a serious actor in developing advanced AI.
In spite of this, in just one year, starting from zero — initially just myself trying to recruit a bunch of people — we organized three protests. We got coverage — I don't have the exact number in mind, but something like 30 newspaper articles about us over the full year. We wrote a 100-page counter-expertise to the government's AI commission report. We accumulated more than 1 million views on our media appearances. I became the most well-known public figure in France advocating about the catastrophic risks of AI. And we organized a series of high-profile conferences, culminating in a conference at the French Senate just last month.
I think what really worked was professionalism. My strategy in France was that we needed to grow along two metrics. The first is visibility: media appearances, protests, and so on. The second, which I think is quite important, is more like reputation: being seen as credible actors who know what they are talking about, and getting access to politicians that way. So in a year we built a community of about 2,000 people, with 100 paying members of PauseAI France and dozens of volunteers working in a very structured manner. I really think that if we can shift the narrative in France, which was actually actively trying to silence us, we can do it anywhere. I think this is a good proof of concept.
>> Yeah, absolutely. I've been very impressed by how well you and the rest of the people in PauseAI France have done. So, in terms of both PauseAI France and PauseAI globally: we've been going for roughly two and a half years now, and lots of things have changed in the AI space and in regulation. What is your evaluation of PauseAI's impact? What do you think we have done well, and where do you think there are areas for improvement?
>> I'm going to give a fairly short answer to that one for now. I think what we've done well is gathering a large group of people who understand what is happening, and generally raising awareness. Where we have not done so well, I think, is in our way of organizing. At the moment we are still basically a group of people who are aware of what's happening and who want to help. What we are going to do better from now on is organizing and coordinating all of these people to maximize their impact — starting a chain reaction. That's going to be my focus.
>> And you said to me in a conversation last week, I believe, that solving the existential threat from AI is actually an easier challenge than solving climate change. Why do you think that?
>> Yeah, I think it can sound counterintuitive, but you have to think about the mechanics of the problem. If you want to solve climate change — and I'm not saying that we're not going to be able to solve climate change — that means that basically 8 billion people need to change their way of living. You need to change the way we use cars, change the way we eat; as far as I know, those are the two biggest factors. It requires massive, distributed sacrifice, and you need to change the way industry works as a whole. To stop AGI, you don't need 8 billion people to change. Actually, you need very few people to change. You need to stop maybe a few thousand people, in two countries, basically. It's a very concentrated supply chain: the talent is very concentrated, the capital that flows is still fairly concentrated, we know where the data centers are, we know who the CEOs are, and there are very few of them. So it's all bottlenecked, and because it is bottlenecked, I think a small, determined group of people can push the dominoes that will end up applying enough pressure to squeeze that bottleneck. We don't need to convince the entire world; we just need to convince the people who have the power to shut it down.
>> It seems like the polling shows we pretty much already have the support of the public, and every month, it seems, a larger and larger number of politicians are beginning to take the extreme risks of advanced AI seriously. What do we need to do, as PauseAI and as the anti-superintelligence movement, to accelerate the process of politicians not only understanding this but being willing to publicly state their opposition to a reckless, unregulated race to superintelligence? What are the steps we need to take?
>> Yeah. So, I want to be transparent here.
At the moment we are recording this, I'm not even the CEO yet — that starts next week. So I'm still in a listening phase: I'm interviewing everyone, I'm looking at the data, I'm trying to really grasp the situation at PauseAI and the global situation. I'm not at the stage where I have a 10-point policy plan I can give you today, but I would say I have a fairly clear vision for what we need to build.
The key, I think, is to turn PauseAI into a well-oiled machine. We need to build a structure that can handle scale. We need to move from a group of activists to a proper organization with stronger governance and professional tools. We need to hire the right people, and once we have these foundations in place, that's where we can really start a chain reaction. A lot of people think of the situation as a race between different labs, or a race between the US and China. That's not reality. The reality is that there is a race between the people who are trying to build a godlike entity that will most likely kill everyone, and us, the people who are trying to coordinate to stop them. So what we need to do is get this coordination onto an exponential path as well: start an exponential where the number of people who are aware of the problem grows exponentially, and the number of people who are not only aware but then decide to take action — to join PauseAI or something adjacent — also grows exponentially. I think that's the vision.
>> Obviously, there are lots of other organizations besides PauseAI involved in this space, all doing different things. So what is PauseAI's niche? What are the unique actions we can offer that make our existence important?
>> Well, that's the thing: most of the organizations that exist in this space are very much playing the inside game. You have a lot of think tanks, you have the Center for AI Safety, you have organizations like that. There are at the moment very few movements like PauseAI that are trying to be popular — to enable everyone to take action. So that's our niche: going to any human and making them understand that they have a problem, that we have a shared responsibility to address it, and that they can help. The key is that we have to set up the systems so that anyone in the world can join and start taking actions that go in the right direction, and be part of this chain reaction.
>> What's coming up next for PauseAI? We have PauseCon Brussels early next year, and we have the — no longer the safety summit, but the AI Impact Summit — in India. What are the things on the horizon that people in PauseAI should know about?
>> Yes. February will be a very interesting month, because we have PauseCon Brussels coming up. We now have Stuart Russell confirmed as a speaker, so that's going to be very interesting. We are reaching out to other high-profile speakers, and we will be in contact with the European Parliament. I think Europe has a big role to play in what's going to unfold. Around the same time, there is the AI summit in India. I think India is an interesting place because it's kind of a bridge between the superpowers and developing countries, so there are actions to be taken there, and we will definitely organize some international action at that point.
Before that — starting now, probably, depending on when we release this video — we are going to be launching our Christmas fundraising. I mentioned that we need to build the machine; well, machines need fuel, and we are currently hiring for some key roles to make this movement professional. For this we need money. We will also put forward profiles of some of the volunteers who have helped us grow so far, and with this fundraising we're going to try to enable more volunteers to help us, using stipends and microgrants.
If you're watching this and you've been waiting for a sign to get involved, I think this is it. This is a good time, now that we're going to have new leadership: the wind is at our backs, and you can join us now, start being active, or donate.
>> Of course, links for everything Maxime mentioned will be available in the description and in the comments — both to join PauseAI and become an active participant in how this all plays out, rather than staying on the sidelines, and, if you want to help fund other volunteers, a link to the Christmas campaign as well. So thank you very much for joining us today, Maxime. I'm looking forward to seeing what we can do at PauseAI going forward. And thank you for watching, everyone. Make sure you join PauseAI; make sure you donate.
>> Thank you. And thanks to everyone who is watching and is aware of this issue. That's the first step.
Maxime Fournes stresses that AI safety is not just about competition between countries or labs, but a critical struggle between those creating powerful AI systems and those trying to prevent catastrophic outcomes. The urgent challenge is to put global coordination among AI safety advocates on an exponential path.
Maxime’s background includes a decade of AI engineering and research, notably at a systematic hedge fund. After milestones like AlphaGo and the GPT models made him feel the acceleration around 2022, he joined PauseAI as a volunteer in late 2023 to help raise public awareness and drive action on AI risks.
PauseAI encountered challenges such as French AI experts relying on outdated models and downplaying risks, plus government resistance driven by a desire to compete in AI. Despite this, PauseAI France managed to organize protests, secure media attention, publish a comprehensive policy critique, and hold a Senate conference, growing a supportive community.
Maxime argues that AI development is concentrated within a relatively small number of labs, investors, and policymakers, making targeted pressure and influence feasible. In contrast, climate change requires behavioral changes from billions globally, a far more diffuse and complex challenge.
Protests are cost-effective tools that raise public and media awareness about AI risks. Maxime’s experience organizing France’s first PauseAI protest, despite limited participants, led to significant press coverage, demonstrating how grassroots activism can amplify the message and build momentum.
Maxime aims to evolve PauseAI from a loosely organized activist group into a professional, scalable organization with clear governance. Upcoming initiatives include organizing high-profile conferences, participating in global summits, and launching fundraising campaigns to hire staff and empower volunteers, all focused on exponentially growing PauseAI’s impact worldwide.
PauseAI invites people aware of AI risks to join the movement by volunteering, donating, or participating in campaigns like the Christmas fundraising drive. These contributions help hire essential staff and provide microgrants that empower grassroots activism, fostering a collective effort to steer AI development safely.