
The State of AI in the Australian Climate Movement

September 2025

Discover how Australia's climate movement is navigating AI adoption in this groundbreaking research report.

Based on surveys of 23 climate organisations and 44 individuals across Australia, The State of Artificial Intelligence in the Australian Climate Movement reveals the complex tensions organisations face between harnessing AI's operational benefits and staying true to their environmental missions. This first-of-its-kind study uncovers widespread "shadow use" of AI tools, significant distrust of Big Tech companies, and the strategic challenges climate organisations must overcome to thoughtfully adopt AI while advocating for systemic change.


Download the Full Report

The report exposes a critical "intention-action gap" within the climate movement.

While nearly two-thirds of surveyed individuals use generative AI tools, over half of organisations lack formal policies to guide this adoption. Climate organisations are caught between recognising AI's potential to enhance their work and concerns about its environmental costs, from energy-intensive data centres to water consumption and carbon emissions. The research reveals how these tensions are creating organisational paralysis just as national AI policy discussions progress without essential climate movement input.

Gain access to actionable insights and strategic frameworks for navigating AI adoption in mission-driven organisations.

This comprehensive 44-page report includes detailed survey findings, comparative analysis with UK studies, practical recommendations across five key areas, and real-world examples of how organisations are already experimenting with AI tools. Whether you're a climate organisation leader, policy advocate, or researcher interested in the intersection of technology and social movements, this report provides the evidence base and strategic guidance needed to make informed decisions about AI adoption while maintaining mission integrity.

Read a sneak preview below.

This project was supported by Surge Climate Talent as part of the Climate Career Bridging Fellowship.

Executive Summary

Australia is behind its peer nations on artificial intelligence (AI). While the government is taking tentative steps towards regulation, organisations are assessing the risks and opportunities of generative AI.


Within the Australian climate movement, there are strong tensions between the utility of generative AI, organisations’ mission alignment, and individuals’ values. While some people wish to harness the efficiency of this technology, others are entirely resistant. Without a clear path to resolving these tensions, organisations stall or decide against using the technology outright. While climate organisations debate whether and how to use generative AI, the national policy conversation around AI progresses without their necessary and relevant perspectives to urge a shift in direction.


Meanwhile, the government continues to make plans based on the agenda of whoever is in the room – and Big Tech is ensuring it is present. If climate organisations are unable to resolve their internal positions, their important voices will be excluded from the national conversation.

 

This report aims to demystify some of the issues around this ever-growing topic, present a snapshot of how some organisations in the Australian climate movement are currently using (or avoiding) generative AI, and discuss ways to help resolve some of the strongly felt ethical tensions and move forward.

Context

Climate organisations must navigate a complex landscape when considering AI adoption – a technology that could enhance their work while potentially undermining their core missions and values. Individuals who work in these organisations take on a range of personas, from enthusiastic advocates to active resisters, creating additional complex intra-organisational dynamics.

 

There is genuine reason for concern. AI is driving the global expansion of data centres, which require substantial amounts of energy and water to operate, thereby significantly increasing greenhouse gas (GHG) emissions. The supply chain impacts extend into the digital world, with the training of AI reducing the quality of data commons, putting creators at risk, and spreading dis- and misinformation further and faster. Without a viable alternative, the climate movement is left with Big Tech’s profit-driven vision for AI.

Key Findings

This study examined how Australian climate organisations are navigating the adoption of generative AI technology. Two complementary surveys gathered data from 2nd July to 18th August 2025, with 23 organisations and 44 individuals across the Australian climate movement participating. The research reveals significant tensions between harnessing AI's utility and mission alignment within the climate sector.


Climate organisations are informally using generative AI, with limited support


Most climate organisations surveyed have only recently started considering generative AI tools. Uptake is relatively high, and where generative AI is being used, it is used frequently and informally. Most organisations are primarily using or testing generative AI for efficiency gains, but some are explicitly using it to solve capacity issues.


Most organisations do not have policies related to generative AI, though those with more resources or digital capacity are slightly further ahead. Potentially as a result, “shadow use” of generative AI – staff using AI tools without express permission – is very high in the climate organisations surveyed.


Climate organisation staff, even those who are not actively using generative AI, rate themselves as highly competent with it. Paradoxically, many are less confident in their skills. Decision-makers in organisations generally do not have a good read on their own staff’s generative AI skills and capacity.


The uptake of generative AI in climate organisations appears to stem from staff initiatives, rather than management actions. There is currently little active, internal training or support available for staff or volunteers in these organisations, with a minor exception for those with more resources.

 

Despite a large appetite for skills and knowledge development in generative AI, most organisations are not taking many steps to meet this demand. Most are not yet seeking external support on generative AI, or are not well-resourced to do so.


Climate organisations are extremely conflicted about generative AI


Climate organisations are navigating an extremely complex context when considering adopting generative AI, and a large portion of organisations are not yet using these tools.


Generally, climate organisation stakeholders are torn – understanding the benefits that generative AI could bring to their work, while recognising that it may not align with their missions. Many are concerned about different aspects of generative AI, and stakeholders weigh heavily both how the models are created and their broader societal effects. Compared to other social sector organisations, climate movement stakeholders tend to have higher levels of distrust of both AI products and their makers. There was also some active resistance to these tools in the organisations surveyed.

 

There is much uncertainty in climate organisations about the role that AI and generative AI will play in improving the public good, but decision-makers are largely optimistic. However, many climate organisations are still in the early stages of engaging with AI, carry unresolved ethical conflicts, and are not yet engaging in wider debate beyond their internal discussions, though there is an appetite for AI-related campaigns or workstreams within the movement.

Recommendations

The survey results reveal that climate organisations are caught in an "intention-action gap" – using generative AI tools despite value conflicts due to a lack of viable alternatives to Big Tech's vision. To address this tension, organisations should adopt a three-layered approach: conscious consumption (selective use of AI tools with lower environmental impact), supply chain awareness (supporting open-source alternatives and data commons), and directional advocacy (pushing for systemic change toward more sustainable AI development). Similar to driving cars while advocating for better transport policy, organisations can use current tools strategically while working toward systemic alternatives. The following five areas of recommendations cover all three layers:

1. Internal Collaboration and Knowledge Sharing

Organisations should encourage experimentation with generative AI across different functions, with staff sharing knowledge and approaches with colleagues outside their own roles. Technology leaders should take time to understand the skills, knowledge and concerns in their organisation, and management and board leaders should be engaged, approaching conversations with curiosity and linking use to impact measurement.

2. Policy Development and Risk Management

Organisations should consider introducing broad principles that allow experimentation within safe limits, taking time to understand how generative AI could be used in their context before developing comprehensive policies. Staff and volunteers should be consulted in policy development, especially because many are already using generative AI without express permission.

3. Community Building and Movement-Wide Collaboration

The Australian climate movement is already widely collaborative, and organisations should build on this by being open about their AI policy development processes and actively sharing approaches and learnings with allied organisations. Individuals should seek out or create communities of practice, and organisations should actively work together to discuss opportunities and challenges related to generative AI.

4. Resource Development and Training Support

The Australian climate movement is under-resourced and ill-equipped to address the AI transition. It should seek support to develop and compile freely available, high-quality training resources relevant to climate organisations, and workshop ways to constructively address the unique mission tensions related to AI adoption.

5. External Advocacy and Industry Regulation

Climate organisations are not engaging in wider AI discussions despite believing that external developments are relevant to their missions. Australian climate organisations should form a coalition to campaign on AI-related topics and bring a unified voice to government discussions. To support this, participating organisations should work together to clarify their vision for AI development, drawing on elements from Public AI and the Within Bounds statement, and campaign for the industry to follow that vision.

The Australian government’s "wait and see" approach to AI regulation creates an opportunity for climate voices to influence policy before Big Tech consolidates power. However, organisations need concrete alternatives to advocate for – moving beyond individual consumption choices toward coordinated advocacy for an AI industry that serves environmental and social justice rather than purely profit motives.

Continue reading the full report

To continue reading, download the full report by entering your details above.
