Future of Life Institute (FLI)

Civic and Social Organizations

Campbell, California 21,427 followers

Independent global non-profit working to steer transformative technologies to benefit humanity.

About us

The Future of Life Institute (FLI) is an independent nonprofit that works to reduce extreme, large-scale risks from transformative technologies, and to steer the development and use of these technologies to benefit life. The Institute's work primarily consists of grantmaking, educational outreach, and policy advocacy within the U.S. government, European Union institutions, and the United Nations, but also includes running conferences and contests. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

Website
http://futureoflife.org
Industry
Civic and Social Organizations
Company size
11-50 employees
Headquarters
Campbell, California
Type
Nonprofit
Specialties
artificial intelligence, biotechnology, European Union, nuclear, climate change, technology policy, and grantmaking

Updates

  • Learn more about New York state's RAISE Act in Mark Brakel's newest "Inside the Machine" episode ⬇️

    Mark Brakel

    AI Policy Director @ FLI | Board Member

    If my AI model allows you to shut down the electricity grid, or build a bioweapon, is that against the law? 🤔 In San Francisco, your neighborhood deli faces far more regulations than OpenAI in the same city, creating AI systems that could affect millions. But in New York, that could be about to change with a new bill: enter the RAISE Act. The RAISE Act (Responsible AI Safety and Education Act) requires AI companies to: 1) put basic safeguards in place; 2) report incidents within 72 hours; and 3), critically, refrain from releasing AI systems that risk killing more than 100 people or causing more than $1 billion in damage. World's best AI bill? Probably not. Better than nothing? Hell yes. Hear more in the latest from my "Inside the Machine" video series, and if you want to support the bill, please sign the petition at the link in the comments asking NY Gov. Hochul to sign the RAISE Act ⬇️

  • 🚨 The current path we’re on with unregulated AI is a major risk to us all - especially workers. 👉 With their Principles to Protect Workers, the AFL-CIO proposes an alternative future with AI - one in which good jobs, working conditions, safety, economic security, and worker and civil rights are kept safe. 👏 Their 8 key priorities for AI that benefits all:
    1. Strengthen labor rights and broaden opportunities for collective bargaining
    2. Advance guardrails against harmful uses of AI in the workplace
    3. Support and promote copyright and intellectual property protections
    4. Develop a worker-centered workforce development and training system
    5. Institutionalize worker voice within AI R&D
    6. Require transparency and accountability in AI applications
    7. Model best practices for AI use with government procurement
    8. Protect workers’ civil rights and uphold democratic integrity
    🔗 Read their report in full at the link in the comments below:

  • 💼 We're hiring a UK Policy Advocate/Lead! 🇬🇧 This role will:
    1) Build awareness among key stakeholders about the fundamentals of AI safety, as well as the benefits of putting clear safeguards in place.
    2) Support the UK's domestic regulation of AI.
    3) Further the UK's role in shaping the global governance of AI.
    4) If UK Policy Lead: develop and take full ownership of FLI's strategy within the UK, determining long-term plans and spearheading engagement within the nation.
    📌 £95,000 - £160,000 a year, preferably based in London or able to travel in regularly.
    🔗 Apply by 19 November at the link in the comments:

  • "This whole idea of tying together intelligence, creativity, and consciousness is part of this rather desperate effort of humans to prove to themselves over and over again that we are the most wonderful, the most important beings in this universe, and I really think we need to stop." -WaveAI co-founder/CEO Maya Ackerman, PhD. on the newest FLI Podcast episode, discussing AI and creativity. 👇 Agree? Disagree? Let us know in the comments, and tune in to the full conversation at the link:

  • ⌛ Just over 3 weeks remain to apply for our 2026 PhD fellowship programs, available in two tracks: US-China AI Governance and Technical AI Existential Safety. 🏫 Fellows will receive 5 years of PhD tuition & fees covered; $10,000 for research-related expenses; a $40,000 annual stipend (for universities in the US, UK, or Canada); invitations to research events; and more. 🔗 Learn more and apply by November 21 at the link in the comments.

  • Future of Life Institute (FLI) reposted this

    Tekla Emborg

    EU AI policy research @Future of Life Institute

    I recently joined Al Jazeera to talk about why the EU AI Act is such an important piece of legislation:
    1. The EU AI Act is crucial for addressing risks from AI. ➡️ It requires companies to carry out risk identification, risk assessment, and risk mitigation when developing very large AI models. This is important because models are already showing harmful behaviours, including attempts to self-replicate onto other servers when facing shutdown, and incidents in which the model Grok repeatedly abused the Polish Prime Minister and went on rants about ‘white genocide’ in South Africa. If companies don't comply, they can be fined up to 3% of annual turnover.
    2. The EU AI Act places obligations where they should be - on the companies developing the models. ➡️ Citizens want regulation of AI. 70% of voters (both Republican and Democratic) in the US support regulation putting safeguards in place for AI. A recent Pew survey found that across 25 countries, 53% of adults trust the EU to regulate AI effectively. In every other industry, from pharmaceuticals to food safety and aviation, lawmakers see it as their job to ensure that companies behave and that their products are safe. The EU AI Act is landmark legislation, placing reasonable safety obligations on companies to ensure that their models are safe to develop and use.
    3. The EU AI Act is the first step in a wave of laws raising safeguards for AI. ➡️ Lawmakers globally are waking up to their responsibilities. This year, a range of legislators followed in the EU's footsteps, including South Korea (Basic AI Act, January 2025), Japan (Act on the Promotion of Research and Development and the Utilization of AI-Related Technologies, June 2025), and California (SB53, September 2025).

  • "The top AI scientists worry that we are unable to control something that is significantly smarter than us across all domains." FLI's Director of Policy, Mark Brakel, covered the Superintelligence Statement (now with nearly 50,000 signatures ‼️) on the latest episode of his new "Inside the Machine" series ⬇️ Sign the Statement at https://lnkd.in/ghU_5Gsa, and be sure to follow Mark for the next episodes of "Inside the Machine"!

  • "'Controllable' is the key word there. This thing could fly off the handle and they couldn't do anything about it." "Right now we're not heading in any good direction with controls over any of this kind of technology." The Superintelligence Statement (now with 30,000+ signatories!) on Fox News TV - add your name & share at the link in the comments:

Funding

Future of Life Institute (FLI): 2 total rounds

Last round: Grant, US$482.5K

See more info on Crunchbase