Compare this with NVIDIA – http://www.nvidia.com/object/tesla_build_your_own.html
Maybe the best option is one of the well-structured and beautifully engineered 1 TFLOP machines built on NVIDIA GPUs. The power consumption in that case, however, is a whopping 1,200+ W. For the Cubieboard cluster, power consumption is less than 100 W for this small configuration, as a simple linear calculation shows. Such a cluster can be built flexibly, especially by companies that are just starting out with open source systems. This is a minimalist configuration, but I see future possibilities; at the very least it can be a great test system.
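As a rough sketch of that linear calculation, here is a back-of-envelope comparison in Python. The figures are the ones quoted above (1 TFLOP at 1,200+ W for the Tesla-class machine, under 100 W and the 8 GFLOPS estimate for the Cubieboard cluster); the exact numbers are assumptions, not measured results.

```python
# Back-of-envelope performance-per-watt comparison, using the
# rough figures quoted in this post (not measured values):
#   NVIDIA Tesla-class machine: ~1 TFLOP (1000 GFLOPS) at ~1,200 W
#   Cubieboard Hadoop cluster:  ~8 GFLOPS at <100 W
tesla_gflops, tesla_watts = 1000.0, 1200.0
cubie_gflops, cubie_watts = 8.0, 100.0

tesla_eff = tesla_gflops / tesla_watts  # GFLOPS per watt
cubie_eff = cubie_gflops / cubie_watts

print(f"Tesla-class machine: {tesla_eff:.2f} GFLOPS/W")
print(f"Cubieboard cluster:  {cubie_eff:.2f} GFLOPS/W")
```

On these numbers the GPU machine wins on efficiency per watt, while the Cubieboard cluster wins on absolute power draw and entry cost; and if 8 GFLOPS is an underestimate, the gap narrows.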
More hands-on details available at: http://cubieboard.org/2013/08/01/hadoophigh-availability-distributed-object-oriented-platform-on-cubieboard/
More on cubieboard.org and its assembly: http://en.wikipedia.org/wiki/Cubieboard
For GFLOPS see: http://en.wikipedia.org/wiki/FLOPS
Accordingly, it looks like 8 GFLOPS is a serious underestimate.
One important point: you can build a Hadoop cluster, but to get value out of your big data you need to hire a $100K analyst, at the least!
Anybody interested in working with me to build and support Hadoop clusters for our clients?