Though often perceived as intangible, the cloud is undeniably physical. The growing digitalization of our economy, which users experience through the simplicity of a screen, relies on an extensive network of servers spread across the globe, consuming massive amounts of energy. Like its competitors, Amazon Web Services (AWS), the global leader in cloud services, faces the ongoing challenge of handling escalating data processing and storage while reducing towering operational costs and protecting the environment.
The recent announcements made by AWS at its annual re:Invent conference, which gathered over 60,000 attendees in Las Vegas, underscore the company’s commitment to advancing its infrastructure to improve energy and operational efficiency. These innovations aim to lower costs for users while addressing the growing environmental impact of cloud computing, particularly as artificial intelligence and machine learning continue to demand unprecedented levels of computational power.
AWS has made significant strides in its sustainability goals, achieving a 14% reduction in carbon intensity while expanding capacity for customers by 10%. Through innovative data center designs, mechanical energy usage has been cut by 46%, and embodied carbon—the emissions associated with construction and materials—reduced by 35%. AWS has also extended the lifespan of its S3 hard drives by 2 years, incorporated 30% recycled or biobased plastic into its products, and recycled or sold 23.5 million components on the secondary market. Over 99% of materials sent to Amazon Reverse Logistics Hubs are reused, recycled, or resold. With its latest server designs, AWS has achieved a near-perfect Power Usage Effectiveness (PUE) of 1.08, reflecting exceptional energy efficiency (a PUE of 1 is ideal).
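For context, PUE is simply the ratio of total facility power to the power actually delivered to IT equipment, so 1.08 means roughly 8% overhead for cooling, power conversion and everything else. A minimal illustration, using made-up numbers rather than any AWS-reported breakdown:

```python
# Illustrative PUE calculation (the load figures are hypothetical, not AWS data).
# PUE = total facility power / IT equipment power; a value of 1.0 would mean
# every watt drawn by the facility reaches the servers, with zero overhead
# for cooling, power distribution losses, lighting, etc.

it_load_kw = 1000.0        # servers, storage, networking
overhead_kw = 80.0         # cooling, power conversion losses, lighting

pue = (it_load_kw + overhead_kw) / it_load_kw
print(f"PUE = {pue:.2f}")  # -> PUE = 1.08, i.e. ~8% of facility power is overhead
```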
A cornerstone of AWS’s strategy is how it intends to power this next-generation infrastructure. The company announced three agreements to develop small modular nuclear reactors (SMRs) as part of its effort to achieve carbon neutrality by 2040. With their smaller physical footprint and faster construction timelines than traditional reactors, these SMRs will enable AWS to meet the growing energy demands of its data centers sustainably.
In partnership with Energy Northwest, AWS plans to develop four advanced SMRs in Washington State, with an initial capacity of 320 megawatts, expandable to 960 megawatts—enough to power over 770,000 homes in the United States. Additionally, AWS has signed an agreement with Dominion Energy to explore the development of an SMR near the North Anna nuclear power plant in Virginia, which will supply at least 300 megawatts of energy to the region.
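A quick back-of-envelope check makes the homes figure plausible; the average household consumption used below is an assumption of this article, not a number from AWS or Energy Northwest:

```python
# Rough sanity check of the "over 770,000 homes" claim for 960 MW of capacity.
# Assumes an average US household uses about 10.8 MWh per year (approximate;
# actual consumption varies by region and year) and continuous generation.

capacity_mw = 960
hours_per_year = 8760
annual_generation_mwh = capacity_mw * hours_per_year   # ~8.4 million MWh

household_use_mwh = 10.8                                # assumed annual use per home
homes_powered = annual_generation_mwh / household_use_mwh
print(f"~{homes_powered:,.0f} homes")                   # ~778,667, consistent with >770,000
```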
Finally, AWS acquired the Cumulus data center campus for $650 million—a strategic facility located next to the Susquehanna nuclear power plant in Pennsylvania. This acquisition positions AWS as one of the first major tech companies to directly link its infrastructure to an existing nuclear energy source.
AWS also announced the rollout of liquid cooling in its data centers to manage the heat generated by intensive AI workloads more efficiently. The technology offers significant advantages, including lower energy consumption and higher processing density within the same physical space. By reducing reliance on fans and traditional air-conditioning systems, liquid cooling optimizes resource usage and shrinks the carbon footprint of AWS operations.
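A first-principles comparison helps explain why liquid cooling scales to dense AI hardware: water carries far more heat per unit volume than air, so much smaller coolant flows can remove the same load. The rack power and temperature rise below are illustrative assumptions, not AWS specifications:

```python
# Coolant flow required to remove a given heat load: Q = P / (cp * rho * dT).
# The 100 kW rack and 15 K temperature rise are illustrative assumptions.

heat_load_w = 100_000      # hypothetical high-density AI rack
delta_t_k = 15             # allowed coolant temperature rise

air_cp, air_rho = 1005, 1.2        # J/(kg*K), kg/m^3 (approximate, room temperature)
water_cp, water_rho = 4186, 997    # J/(kg*K), kg/m^3

def volume_flow(cp: float, rho: float) -> float:
    """Volumetric flow (m^3/s) needed to absorb heat_load_w with a delta_t_k rise."""
    return heat_load_w / (cp * rho * delta_t_k)

air_flow = volume_flow(air_cp, air_rho)        # ~5.5 m^3/s of air
water_flow = volume_flow(water_cp, water_rho)  # ~1.6 L/s of water
print(f"air: {air_flow:.1f} m^3/s  vs  water: {water_flow * 1000:.1f} L/s")
```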
Another key piece of the green agenda is the expanded adoption of Graviton, AWS’s custom Arm-based processor for general-purpose workloads, now used by 90% of the company’s top customers. Designed for efficiency, Graviton reduces energy consumption and operational costs, aligning with AWS’s sustainability goals. The company also showcased updates to Trainium, its specialized chip for AI model training, which delivers energy-efficient performance tailored to machine learning workloads.
Another groundbreaking announcement was the introduction of UltraServers, which integrate multiple Trainium2 chips into a single unit. These servers are engineered to optimize the training of large-scale AI models, enabling entire models to run on a single node. This design reduces communication bottlenecks between nodes, significantly lowering latency and maximizing system performance, while also consuming less energy per operation.
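The benefit of keeping a model inside one tightly coupled server can be sketched with a standard ring all-reduce cost model for gradient synchronization; the gradient size and link bandwidths below are placeholders chosen for illustration, not Trainium2 or NeuronLink specifications:

```python
# Ring all-reduce moves roughly 2*(N-1)/N times the gradient size per worker,
# so synchronization time is dominated by the slowest link. The bandwidths and
# gradient size here are illustrative assumptions, not published AWS figures.

def allreduce_seconds(grad_gb: float, workers: int, link_gb_per_s: float) -> float:
    """Approximate time to all-reduce grad_gb gigabytes across workers."""
    gb_moved = 2 * (workers - 1) / workers * grad_gb
    return gb_moved / link_gb_per_s

grad_gb = 200.0  # gradients of a large model (hypothetical)

# Same accelerator count, different interconnects:
intra_node = allreduce_seconds(grad_gb, workers=64, link_gb_per_s=400.0)  # fast in-server fabric
inter_node = allreduce_seconds(grad_gb, workers=64, link_gb_per_s=50.0)   # slower network between servers
print(f"in-server: {intra_node:.2f} s per sync vs cross-server: {inter_node:.2f} s")
```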
AWS also unveiled Project Rainier, a collaboration with Anthropic to build an EC2 UltraCluster composed of UltraServers powered by Trainium2 chips. This infrastructure, capable of delivering over five times the exaflops currently used to train the latest AI models, represents a leap forward in sustainable high-performance computing. By consolidating processing power into fewer, more efficient systems, Project Rainier demonstrates AWS’s focus on balancing the computational demands of AI with energy-conscious design.
The company is also expanding its partnership with Nvidia, whose GPUs remain central to much of its AI infrastructure. AWS announced the upcoming availability of EC2 P6 instances, equipped with Nvidia’s next-generation Blackwell GPUs and expected in 2025. These instances promise up to 2.5 times the computational performance of their predecessors, giving users highly efficient options for energy-intensive AI workloads.
Through these innovations, AWS is not only addressing the operational demands of artificial intelligence but also redefining how cloud infrastructure can meet the dual goals of cost-efficiency and environmental sustainability. As the adoption of AI accelerates, these advances position AWS as a leader in mitigating the energy challenges associated with this transformative technology.