
The Measurement Gap: Why Tracking Energy in Serverless Is So Hard

Nick
January 19, 2026
Serverless platforms like Vercel are optimized for convenience and scalability, not for transparency. As a result, measuring their energy consumption is far from straightforward. Abstraction layers, multi-tenancy, and limited provider transparency hide the physical reality behind your code. Developers end up relying on proxy metrics that describe something, but never the whole story. Until cloud providers expose real energy data, we're making educated guesses in the dark.
The Invisible Problem
You can polish your app, shrink your bundle, and hit perfect Lighthouse scores. But when someone asks how much energy your system actually uses, the only honest answer is: we don't really know. Bundle size and Lighthouse scores are great performance metrics, but they reveal little about the actual watts consumed behind the scenes.
Serverless is built on the idea that you shouldn't care where your code runs. Efficient for teams, but terrible for measurement. This abstraction is great for developer productivity, but it disconnects software from the hardware realities that drive energy consumption. At the same time, these very abstractions allow serverless providers to utilize their hardware extremely efficiently, which often makes the platforms ‘greener’ despite poor measurability.
Why Measuring Serverless Energy Is So Difficult
Anywhere, Everywhere
Your function doesn't run on "a server." It can run anywhere, across hundreds of machines in multiple regions. Traditional hosting allowed you to measure physical energy directly. Serverless removes that possibility completely.
Multi-Tenancy
A serverless function runs on a shared machine. Maybe 50 other functions executed on that same hardware at the same time. Even if we knew the server consumed 150 watts in that moment, attributing a fair slice to your function is impossible without knowing CPU usage, memory bandwidth, and I/O patterns for every tenant. Cloud providers have this data internally, but exposing it in real-time for millions of functions creates both technical and competitive challenges.
Platform-Specific Opacity
While the challenges above apply to all serverless platforms, providers differ in what they expose. Vercel, built atop AWS Lambda, inherits Lambda's limitations but adds another abstraction layer. You get deployment metrics and edge network stats, but the underlying Lambda execution details remain hidden. Other platforms, such as Cloudflare Workers or direct AWS Lambda deployments, expose slightly different metrics, but none provide actual energy consumption data.
What We Can Actually Get
Vercel exposes only a handful of metrics:
- Execution time: How long your function ran
- Memory used: Actual memory consumed during execution
- Memory allocated: Total memory reserved for the function
- Region: Geographic location where the function executed
- Cache status: Whether the response was served from cache
- Instance ID: Unique identifier for the container instance

These metrics allow trend analysis, but do not capture the complete picture. From them, we estimate energy with proxy metrics such as memory-time: allocated memory multiplied by execution duration, commonly expressed in GB-seconds.
This simplified model assumes energy scales linearly with memory and time. In reality, CPU-intensive operations, I/O wait states, and memory access patterns drastically change energy profiles. Two functions with identical memory-time values can have wildly different energy footprints depending on whether they're doing complex computations or just waiting on network calls.
The formula works for spotting trends within your own application but fails for absolute measurements or cross-platform comparisons. That’s why many teams end up using cost itself as a proxy: bills arise from runtime, memory, and network usage, so they reflect the same resources that also consume energy.
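The memory-time proxy above can be sketched in a few lines. This is a minimal illustration, not Vercel's API; the function name and parameters (`duration_ms`, `memory_mb`) are assumptions for the example. It also demonstrates the limitation from the previous paragraph: a CPU-bound and an I/O-bound invocation with the same memory and duration produce identical proxy values.

```python
def memory_time_gb_seconds(duration_ms: float, memory_mb: float) -> float:
    """Proxy metric: allocated memory (GB) x execution time (s)."""
    return (memory_mb / 1024) * (duration_ms / 1000)

# Two invocations with identical memory-time but very different real energy profiles:
cpu_bound = memory_time_gb_seconds(duration_ms=200, memory_mb=1024)  # heavy computation
io_bound = memory_time_gb_seconds(duration_ms=200, memory_mb=1024)  # waiting on network
print(cpu_bound, io_bound)  # 0.2 0.2 -- the proxy cannot tell them apart
```

Because the proxy is blind to what the CPU actually did, it is only meaningful for relative comparisons within the same workload.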
The Missing Layers of Energy
Even if compute energy were visible, three big components remain hidden:
PUE (Cooling and Infrastructure)
Power Usage Effectiveness (PUE) measures data center efficiency by comparing total facility power to IT equipment power. A PUE of 1.0 is perfect (all power goes to IT equipment), while higher values indicate more overhead from cooling and infrastructure.
Data centers consume additional energy for cooling and overhead. AWS reports a global average PUE of 1.15 in 2024, meaning every 100 watts of server power requires an additional 15 watts for cooling and infrastructure. AWS's best-performing site in Europe achieved a PUE of 1.04 (AWS Data Center Sustainability).
However, PUE varies significantly by data center, season, and current load. A function running in a newer facility during winter might have a PUE of 1.10, while the same code in an older tropical data center could see 1.40. Without invocation-level PUE data, you're working with statistical averages that hide significant real-world variation.
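Applying PUE to an estimate is a simple multiplication, but the spread between facilities dominates the result. A small sketch, using the PUE figures quoted above (the function name and Wh units are assumptions for illustration):

```python
def facility_energy_wh(it_energy_wh: float, pue: float) -> float:
    """Scale IT-equipment energy by PUE to include cooling and overhead."""
    return it_energy_wh * pue

# 100 Wh of estimated compute energy under different facility conditions:
average = facility_energy_wh(100.0, 1.15)  # AWS global average
best = facility_energy_wh(100.0, 1.04)     # best-performing European site
worst = facility_energy_wh(100.0, 1.40)    # older tropical facility
print(best, average, worst)
```

The same workload's footprint can differ by a third depending on which facility ran it, which is exactly the variation that invocation-level PUE data would capture.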
Network Energy
Every API call travels through routers, switches, and networks. This energy is real but never attributed to your workload. For instance, a GraphQL query that traverses multiple microservices may appear lightweight in compute metrics. Yet the network devices forwarding the request consume additional energy, often comparable to the compute energy itself in low-utilization scenarios.
Embodied Carbon
Embodied carbon refers to the greenhouse gas emissions from manufacturing, transporting, and disposing of hardware. For a typical server, embodied carbon represents approximately 30% of total lifecycle emissions, with the remainder coming from operational energy use.
A mainstream server generates approximately 1,726 kg CO₂e in embodied emissions over its lifecycle (Data Centre and Server Hardware Carbon). Amortized over an expected 6-year lifespan (AWS Server Lifespan), this equates to roughly 288 kg CO₂e per year. When allocated across millions of function invocations, this might seem negligible per request. But at scale, embodied carbon represents 15-30% of a data center's total lifetime emissions depending on usage patterns and hardware refresh cycles.
No one can say precisely how much belongs to your single function invocation without detailed allocation models that cloud providers don't currently expose.
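The amortization above can be made concrete with back-of-the-envelope arithmetic. The embodied-emissions and lifespan figures come from the sources cited; the invocation count and the equal-share allocation are purely hypothetical assumptions for illustration:

```python
EMBODIED_KG_CO2E = 1726  # lifecycle embodied emissions per server (cited above)
LIFESPAN_YEARS = 6       # expected AWS server lifespan (cited above)

annual_kg = EMBODIED_KG_CO2E / LIFESPAN_YEARS  # ~287.7 kg CO2e per year

# Hypothetical: 50 million invocations/year share one server equally
invocations_per_year = 50_000_000
per_invocation_g = annual_kg * 1000 / invocations_per_year
print(round(annual_kg, 1), round(per_invocation_g, 4))  # 287.7 0.0058
```

Milligrams per invocation looks negligible, which is why embodied carbon only becomes visible when aggregated across a fleet and its refresh cycles.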
What We Can Do Today
Even without perfect visibility, there are practical steps:
Track Proxy Metrics Consistently
Relative changes matter more than absolute numbers. If memory usage or duration spikes 40% after a deployment, energy consumption likely increased proportionally. Cost works as a metric here too: if execution becomes more expensive, energy consumption usually rose as well. Proxy metrics signal a real increase that deserves investigation, even if the exact watt-hours remain unknown.
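A consistency check like this is trivial to automate. A minimal sketch of flagging a relative spike in any proxy metric (the 40% threshold mirrors the example above; the function name is an assumption):

```python
def spiked(before: float, after: float, threshold: float = 0.40) -> bool:
    """Flag a relative increase beyond the threshold (default 40%)."""
    return (after - before) / before > threshold

# Compare average function duration (ms) before and after a deployment:
assert spiked(before=120.0, after=180.0)      # +50% -> investigate
assert not spiked(before=120.0, after=130.0)  # +8% -> within normal variation
```

The same check applies unchanged to memory usage, invocation cost, or any other proxy you track per deployment.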
Detect and Reduce Cold Starts
Cold starts occur when a serverless function initializes for the first time or after a period of inactivity, requiring the platform to load dependencies, establish connections, and configure the execution environment. Research indicates cold starts can introduce overhead consuming several times more energy than warm function invocations.
Analyze instance IDs to identify cold starts. Smaller bundles, fewer dependencies, and strategic caching reduce both frequency and initialization overhead.
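Instance-ID-based cold start detection can be sketched simply: the first time an instance ID appears in your logs, that invocation very likely paid the initialization cost. The log record shape (`instance_id` key) is an assumption for the example:

```python
def count_cold_starts(invocations: list[dict]) -> int:
    """Treat the first appearance of each instance ID as a cold start."""
    seen: set[str] = set()
    cold = 0
    for inv in invocations:  # expects records like {"instance_id": "..."}
        if inv["instance_id"] not in seen:
            seen.add(inv["instance_id"])
            cold += 1
    return cold

logs = [{"instance_id": "a"}, {"instance_id": "a"}, {"instance_id": "b"}]
print(count_cold_starts(logs))  # 2
```

Tracking the cold start ratio over time shows whether bundle and dependency reductions are actually keeping instances warm.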
Optimize Observable Metrics
These correlate strongly with real efficiency:
- Less memory means lower DRAM refresh power and smaller memory bandwidth
- Shorter duration directly reduces CPU time and energy
- More cache hits eliminate redundant computation and network transfers
- Fewer retries and errors avoid wasted work that consumed energy without producing value
- Lower costs indicate less compute and memory usage
Include Regional Carbon Intensity
Carbon intensity measures how many grams of CO₂ are emitted per kilowatt-hour of electricity generated (Carbon Intensity by Country (Electricity Maps)). Global average carbon intensity is approximately 481 grams CO₂ per kWh, but regional differences are dramatic.
Norway's power sector, relying heavily on hydropower, produces just 30 grams CO₂ per kWh. In contrast, India's coal-dependent grid generates 700+ grams CO₂ per kWh, a nearly 24x difference. Weighting proxy metrics by traffic distribution and regional carbon intensity provides a more realistic footprint estimate than assuming all energy has equal environmental impact.
The Path Forward
Cloud Providers
We need real transparency: per-function energy use, real-time PUE values, network energy attribution, and standardized reporting formats like the Software Carbon Intensity (SCI) specification, now recognized as ISO/IEC 21031:2024.
The technical infrastructure exists to measure this data. It's already collected for capacity planning and billing. The challenge is surfacing it without exposing competitive insights about data center efficiency or hardware configurations. A realistic first step might be aggregated energy metrics at the project or account level, letting teams track trends without revealing infrastructure details.
Developers
Ask for energy data in vendor selection criteria. Compare platforms not only on price and performance but also sustainability commitments and measurement capabilities. Share measurement approaches, normalize talking about energy efficiency in code reviews, and treat energy as a first-class metric alongside latency and cost.
The Industry
Establish shared benchmarks, publish APIs for energy data, and integrate energy tracking into CI/CD pipelines. As a concrete example, open-source CI tools could track energy per build step, letting teams see the carbon cost of running tests or deploying new features automatically. When energy becomes as visible as build time, optimization becomes routine rather than exceptional.
Why This Matters
The ICT sector currently accounts for approximately 4% of global greenhouse gas emissions, a share likely to rise sharply if current trends continue. Serverless adoption is exploding. Even if serverless is ‘greener’ than traditional hosting, small improvements that scale can lead to noticeable savings.
You might not see the exact watt-hours, but optimizing code for speed and efficiency almost always means lower energy consumption and a smaller carbon footprint. A 20% reduction in execution time across billions of invocations translates to megawatt-hours saved monthly.
Conclusion
Even though the limits of our measurability still constrain us today, serverless architectures already represent a step in the right direction. They enable an overall ‘greener’ use of computing resources. The lack of perfect metrics isn't an excuse for inaction. Optimize what you can measure, reduce what you can influence, and keep pushing for transparency.
The planet doesn't care if our numbers are perfect, only that we act on the information we have and continuously improve.