Over the past year or so, there’s been plenty of head-scratching as to how to meaningfully measure a server’s power performance: that is, how efficiently it uses energy to do its work. This kind of metric is important as datacenter operators struggle to keep energy costs down and free up floor space — without sacrificing service quality.
Plenty of folks have invested resources and brainpower in the task, from independent analysts such as Neal Nelson and Associates, as well as InfoWorld's chief technologist Tom Yager, to large-scale organizations such as The Green Grid and even the EPA.
Thus, I was rather intrigued to learn this week that SPEC (Standard Performance Evaluation Corporation) has announced what it deems “the first industry-standard benchmark that measures power consumption in relation to performance for server-class computers.” It’s called SPECpower_ssj2008, a name that doesn’t so much roll off the tongue as ooze — but what’s in a name, anyway?