On August 20, Greenplaces hosted an engaging webinar, AI & sustainability: how tech companies can accelerate climate impact. The session brought together sustainability leaders from Sightly and TripleLift, alongside AI for Good researcher Nathaniel Burola, to explore how artificial intelligence can both accelerate progress toward climate goals and complicate efforts to meet them.

The discussion highlighted the environmental footprint of large language models (LLMs), practical strategies for responsible AI use, and the regulatory shifts requiring organizations to measure and disclose AI-related emissions.

The environmental cost of AI

Panelists underscored that AI is not impact-free. Training a single large model can consume enough electricity to power 5,000 U.S. homes for a year, while everyday use continues to drive demand for electricity, cooling water, and network bandwidth.

For example, ChatGPT alone consumes an estimated 2.5 million liters of water daily to serve global prompts. As usage scales, inference can quickly outpace training in terms of cumulative environmental cost.

In the Q&A, panelists noted that many companies underestimate the everyday cost of inference. Nathaniel Burola explained that most client questions he hears are less about training emissions and more about the water and energy required for a single ChatGPT query. He emphasized that understanding these micro-costs is critical for setting realistic sustainability goals.

Practical steps to reduce AI’s footprint

The panel offered several actionable takeaways for organizations:

  • Model selection: Smaller, faster models often meet business needs with lower energy intensity.

  • Prompt optimization: Techniques like caching repeated queries, concise chain-of-thought prompting, and batch processing reduce unnecessary compute cycles (a minimal caching sketch follows this list).

  • Workload scheduling: Running non-urgent AI tasks during low-carbon grid hours can significantly cut emissions (see the scheduling sketch after this list).

  • Measurement and transparency: Tracking token usage, data center region, and total query volume allows businesses to make informed decisions and communicate impact credibly (the caching sketch below also logs this data).

  • Governance: Establishing policies that weigh both business value and environmental impact prevents AI overuse and aligns technology choices with brand values.
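
To make the prompt-optimization and measurement points concrete, here is a minimal Python sketch rather than a production implementation. The call_model function is a hypothetical placeholder for whichever LLM client your stack uses, and the region tag is an assumed value you would replace with your own. The idea is simply that a thin wrapper can serve repeated prompts from a cache and keep a per-call token log you can later roll into emissions reporting.

```python
# Minimal sketch: prompt caching plus token-usage logging.
# call_model() is a hypothetical placeholder for your LLM provider's SDK;
# swap it out and read token counts from that SDK's response object.
import hashlib
import json
from datetime import datetime, timezone

_cache: dict[str, dict] = {}   # in-memory cache; use Redis or similar in production
usage_log: list[dict] = []     # append-only record for later Scope 2/3 reporting


def call_model(prompt: str, model: str) -> dict:
    """Hypothetical LLM call. Replace with your provider's client.
    Should return the completion text and the tokens it consumed."""
    raise NotImplementedError


def cached_completion(prompt: str, model: str = "small-model") -> str:
    """Serve repeated prompts from cache so identical queries don't re-run inference."""
    key = hashlib.sha256(f"{model}:{prompt}".encode()).hexdigest()
    if key in _cache:
        return _cache[key]["text"]

    result = call_model(prompt, model)   # e.g. {"text": "...", "total_tokens": 412}
    _cache[key] = result

    # Record what each call cost so AI usage can be folded into disclosures later.
    usage_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "total_tokens": result["total_tokens"],
        "region": "us-east-1",           # assumed data center region tag
    })
    return result["text"]


def usage_summary() -> str:
    """Aggregate token volume per model for reporting or internal dashboards."""
    totals: dict[str, int] = {}
    for row in usage_log:
        totals[row["model"]] = totals.get(row["model"], 0) + row["total_tokens"]
    return json.dumps(totals, indent=2)
```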
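
The workload-scheduling idea can be sketched the same way. The get_grid_carbon_intensity function below is a hypothetical placeholder (in practice it would be backed by a grid-data provider such as Electricity Maps or WattTime), and the 250 gCO2e/kWh threshold is illustrative, not a recommendation.

```python
# Minimal sketch: hold non-urgent AI jobs until grid carbon intensity
# drops below a threshold, then run them.
import time


def get_grid_carbon_intensity(region: str) -> float:
    """Hypothetical lookup returning grams of CO2e per kWh for the given region."""
    raise NotImplementedError


def run_when_grid_is_clean(job, region: str = "us-east-1",
                           threshold_gco2_per_kwh: float = 250.0,
                           poll_seconds: int = 900):
    """Poll the grid signal and run the job only when intensity is below the threshold."""
    while True:
        intensity = get_grid_carbon_intensity(region)
        if intensity <= threshold_gco2_per_kwh:
            return job()               # low-carbon window: run now
        time.sleep(poll_seconds)       # otherwise wait and check again


# Example: defer a nightly batch-embedding job to a low-carbon window.
# run_when_grid_is_clean(lambda: batch_embed(documents))
```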

During the Q&A, panelists expanded with concrete examples. Maeve Gordon shared that her team runs regular “data hygiene” cadences, deleting inactive accounts and redundant data to reduce unnecessary storage and compute waste. Kate Fiala highlighted that some companies are even “training models not to answer trivial, easily searchable questions” as a way to conserve compute resources. These small practices, panelists stressed, can add up to meaningful impact.

Regulatory readiness

California’s SB 253 and SB 261 will soon require companies above certain revenue thresholds to disclose greenhouse gas emissions and climate-related financial risks, and AI workloads will fall within those disclosures. Panelists emphasized that organizations should start tagging AI-related usage data now and integrating it into their Scope 2 and Scope 3 disclosures.

Compliance isn’t just about avoiding penalties; it can also build trust with investors and customers, improve operational efficiency, and position companies as leaders in sustainable innovation.

In response to an audience question, panelists advised that companies operating in California should not wait until 2026. They recommended two proactive steps: start tagging AI-specific emissions data now, and build governance structures around AI use. This approach, they argued, not only ensures compliance but also helps avoid reputational risk when customers begin asking about AI’s climate impact.

Balancing innovation and responsibility

Throughout the session, panelists stressed that sustainable AI is not a limitation but an opportunity. Businesses that integrate AI with intention, using it where it adds the most value while minimizing unnecessary impact, can differentiate themselves in a crowded market.

As Nathaniel Burola noted, “It’s about evaluating not just the immediate benefits of AI, but also the downstream environmental and social impacts before scaling.”

The Q&A reinforced this theme. When asked how to balance governance with innovation, panelists emphasized multi-stakeholder engagement, bringing sustainability, product, and engineering leaders into the decision-making process early. This cross-functional governance helps ensure AI delivers business value without creating hidden environmental costs.

Key takeaway

AI is transforming industries, but it comes with real environmental costs. Companies that measure, optimize, and disclose their AI footprint will not only stay compliant but also strengthen credibility and unlock competitive advantage.

For organizations just beginning this journey, the panelists agreed: ask first whether AI is truly necessary for a given task, start small, measure everything, and make sustainability a core part of your AI strategy from day one.