Wednesday 25 March 2015

10 Big Misconceptions About Cloud Computing


1: My current computer systems will work just as well in the cloud as they do today.

Sadly, no. A network requires servers that can be set up either locally or in the cloud. However, servers in the cloud are shared and the management of that sharing incurs performance overhead. This performance hit could impact specialized industry systems designed for on-site servers. As a user, you do not have control over when that might happen.

2: My current means of working with very large sets of data will be the same -- if not better -- in the cloud.

Not true. The speed of the connection between where you access your data and where it is stored in your cloud might not be as fast as the high speeds you may be used to with an on-site server.

3: Applications I'm accustomed to using throughout my organization will work seamlessly after their support systems go to the cloud.

False. Using the cloud to host any application also means moving all of its supporting elements into the cloud. While this shift can be beneficial, if access to the cloud is interrupted in any way, productivity could grind to a halt.

4: For my organization, the cloud is an either-or proposition: I can either be in the cloud or I can keep my current setup with physical servers.

In reality, the most effective way for an organization to see the benefits of the cloud is to use both setups simultaneously as it slowly transitions into the cloud.

5: Virtualizing my servers is all I need for my company to succeed in the cloud.

Virtualization is the process of taking a given task into the cloud, where a physical server creates a 'virtual machine' to help you complete it more quickly than you could on your own. But a virtualized server by itself is not enough to succeed. Just as there is more to a vacation than choosing the destination, success in the cloud relies on the automated management infrastructure around the server working well -- like packing the right clothes for that getaway.

6: The only way to keep hackers from breaking into my cloud is to build my own.

Not true! In fact, the variety of attacks a cloud sustains can actually make it more secure. That's because the engineers protecting the network will be able to identify and correct more weaknesses. But that doesn't mean you need to build your own cloud. As your security needs grow, any increase in resources directed towards securing your cloud can provide an advantage, whether in money saved or attacks defeated.

7: All I need is a cloud to save money on my IT needs.

Not so fast. The cloud is able to easily adjust the amount of computing power you're using, giving a lot of flexibility to your budget. Focusing on cost alone, though, and not investigating how you might achieve significant efficiencies with new cloud technologies after you migrate could diminish your return on the cloud investment.

8: Once I'm in the cloud, I can help employees be more productive by giving them apps for their smartphones.

The keys to a successful app are often misunderstood. While a cloud's ability to provide enormous computing power can help an app succeed, other factors can be equally important, like whether the app will work without a network connection. A hybrid approach that stores data locally for offline use while interfacing with the cloud on an as-available basis is one best practice.

9: It is easy to change from one cloud provider to another whenever I want to.

Not true. In fact, the bottom lines of many niche cloud providers require them to lock in their customers, typically with long-term contracts or painfully high early termination fees. If you don't go with an industry-leading provider, make sure to read all the fine print and get a professional second opinion.

10: I'm worried that my cloud provider is spying on my activity in their cloud.

With privacy on many minds these days, the multi-billion-dollar cloud computing industry could collapse if even one major cloud provider were caught snooping on its users' data -- or helping others do so. These providers are actually building security mechanisms to guarantee that they themselves cannot access the data.

The HP Apollo 8000 System: Advancing the Science of Supercomputing

The HP Apollo 8000 System is the world's first warm water-cooled supercomputer with dry-disconnect servers, delivering liquid cooling without the risk. Because water cooling is 1,000x more efficient than air,[1] you can dramatically increase the performance capacity of your data center. At the same time, you can eliminate the need for expensive and inefficient chillers, and enable the reuse of hot water to heat your facilities.

This converged system has up to 144 x 2P HP ProLiant Servers per Apollo f8000 Rack with plenty of accelerator, PCIe and throughput options to meet supercomputing workload needs. Get started today with one scalable HP Apollo f8000 Rack and one intelligent Cooling Distribution Unit (iCDU) Rack. It comes packaged with InfiniBand fabric, the HP Apollo 8000 System Manager, modular plumbing kit, and HP Apollo Services tailored for your needs.

The HP Apollo 8000 System's modular, rack-level, innovative design makes it quick and easy to install, monitor, and maintain without the risk of leaks when disconnecting liquid connections. So now you can change the world with your research while lowering your energy bills and CO2 emissions at the same time.

 

Whitebox server vendors bash HP, Dell and IBM

Unbranded units made by the likes of Quanta now account for 15 per cent of global server shipments, according to Dell'Oro

HP, Dell and IBM have been given a bloody nose by the meteoric rise of the whitebox server market, according to analyst Dell'Oro.

The market watcher told CRN it estimates that non-branded servers made by Asian ODMs such as Quanta accounted for a record 10 per cent of server revenues and 15 per cent of server shipments in the final quarter of 2014.

The market has been fuelled by demand for whitebox servers from the so-called big four cloud providers of Amazon, Facebook, Google and Microsoft, each of which now has an installed base of more than one million servers in its datacentres.

The cloud market accounts for more than a quarter of total server shipments, Dell'Oro said.

Dell'Oro defines whitebox units as servers made by ODM contract manufacturers that go to end users directly, rather than through branded server vendors such as HP. These include not only Quanta but also Inventec, Wistron and Wiwynn.

"We expect whitebox server vendors to continue to gain market share driven by cloud datacentre deployments and some large enterprises which follow the best practices of the leading cloud datacentres," Dell'Oro Group director Sameh Boujelbene told CRN.

Boujelbene added that the top three branded server vendors had adopted different strategies to adjust to the new competitive landscape.

"Dell went private to be able to realign its strategy without having to worry about quarterly results and stock price pressure," she said.

"IBM exited the low-margin server segment and divested its System X server business to Lenovo, and HP formed a partnership with Foxconn, to be able to offer lower prices and more customised server

 

The Decline of Data Center Server Giants Dell, HP, and IBM

Note: With the advent of cloud computing, more and more Internet companies are getting hosting services from a wider variety of places. Try NetHosting's cloud hosting today for a great price and unbeatable service.

Intel's sales indicate that, instead of the big three making up most of its business, the chip maker now sells its products to a wider variety of companies.

Intel is one of the biggest processor manufacturers in the world, which is why it has the cold hard data about which server companies are declining based on reduced processor orders. Head of Intel's data center group Diane Bryant says that the server giants that always come to mind first, Dell, HP, and IBM, may no longer be the big three based on Intel's sales figures.

Bryant recalls that four years ago, in 2008, the big three (HP, Dell, and IBM) bought the vast majority of the chips the company sold; seventy-five percent of Intel's revenue that year came from selling processors to those companies. Now, however, everything has changed, says Bryant. Eight server makers now account for three-quarters of Intel's processor sales. One of those eight is Google, which doesn't even sell the servers it makes; it builds them for internal use.

About ten years ago, Google decided to experiment with building its own servers and data centers to save money and time. Time has shown that the decision was a wise one, as Google has grown exponentially in the past ten years and the company's revenue has increased in leaps and bounds. Google started the movement and other companies began following suit, drawing business away from the likes of HP, Dell, and IBM. Additionally, more and more companies (like Facebook and Amazon) are buying servers directly from original design manufacturers (ODMs) in Asia, which also saves time and money. Some of those ODMs also provide hardware for HP and Dell, and are getting the same business without having to deal with the middlemen.

More than ever before, Chinese server makers like Huawei are factoring into the worldwide server market in a big way. ODMs like Quanta and SuperMicro have the same story. In fact, the ODM Wistron now has a U.S. subsidiary called Wiwynn specifically so server buyers in the United States have an easier way to buy servers from manufacturers directly.

For the majority of the past four years, Diane Bryant was actually working as the CIO of Intel, but in January of this year she went back to heading up the data center team as vice president and general manager. Originally the group was called the server group, but in Bryant's absence the name was changed to the data center and connected systems group; it now handles not just server chips but storage and networking devices as well. And of course, as previously mentioned, the biggest change is that HP, Dell, and IBM are no longer the dominant players in Intel's chip sales.

Note: More and more hosting companies are cropping up, but not all of them have secure and stable data centers. Take a virtual tour of the NetHosting data center to confirm our dedication to your privacy and security.

Despite Intel's numbers, an HP representative commented and said that the research firm IDC collected server stats that put IBM, Dell, and HP combined at 73.9 percent of the server market. The edge that sales figures from Intel have is that they give a limited glimpse at Google's activities (which are very well hidden), and Intel also sells chips to ODMs which IDC's numbers don't account for. No matter whose numbers are right and whose are wrong, the big three are certainly working hard to reinvent their businesses. Dell has a new business branch (Dell Data Center Services) dedicated only to building custom servers for big web companies. All three are starting to offer cloud services now as well. It seems like too little, too late, but time will tell if any of the big three can bounce back to be a top dog once again.

To read more about HP's attempt to bounce back and be competitive in the changing hosting market, check out our blog post about HP laying off a sizable percentage of its employees to make way for a new cloud computing focus at the company.

 

Performance per watt

In computing, performance per watt is a measure of the energy efficiency of a particular computer architecture or computer hardware. Literally, it measures the rate of computation that can be delivered by a computer for every watt of power consumed.

System designers building parallel computers, such as Google's hardware, pick CPUs based on their performance per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself.[1]

Definition

The performance and power consumption metrics used depend on the definition; reasonable measures of performance are FLOPS, MIPS, or the score for any performance benchmark. Several measures of power usage may be employed, depending on the purposes of the metric; for example, a metric might only consider the electrical power delivered to a machine directly, while another might include all power necessary to run a computer, such as cooling and monitoring systems. The power measurement is often the average power used while running the benchmark, but other measures of power usage may be employed (e.g. peak power, idle power).

For example, the early UNIVAC I computer performed approximately 0.015 operations per watt-second (performing 1,905 operations per second (OPS) while consuming 125 kW). The Fujitsu FR-V VLIW/vector processor system on a chip, in its four-core FR550 variant released in 2005, performs 51 Giga-OPS with 3 watts of power consumption, resulting in 17 billion operations per watt-second.[2][3] This is an improvement of over a trillion times in 54 years.

Most of the power a computer uses is converted into heat, so a system that takes fewer watts to do a job will require less cooling to maintain a given operating temperature. Reduced cooling demands make it easier to quiet a computer. Lower energy consumption can also make it less costly to run, and reduce the environmental impact of powering the computer (see green computing). If installed where there is limited climate control, a lower-power computer will operate at a lower temperature, which may make it more reliable. In a climate-controlled environment, reductions in direct power use may also create savings in climate-control energy.

Computing energy consumption is sometimes also measured by reporting the energy required to run a particular benchmark, for instance EEMBC EnergyBench. Energy consumption figures for a standard workload may make it easier to judge the effect of an improvement in energy efficiency.

Performance (in operations/second) per watt can also be written as operations/watt-second, or operations/joule, since 1 watt = 1 joule/second.
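As a quick worked check of the figures above, here is a minimal Python sketch; the UNIVAC I and FR-V numbers are the ones quoted in this section:

    # Performance per watt = (operations per second) / watts = operations per joule.
    univac_ops_per_sec = 1905        # UNIVAC I: 1,905 OPS
    univac_watts = 125_000           # while consuming 125 kW
    frv_ops_per_sec = 51e9           # Fujitsu FR-V FR550 variant: 51 Giga-OPS
    frv_watts = 3                    # at 3 watts

    univac_eff = univac_ops_per_sec / univac_watts   # ~0.015 operations per watt-second
    frv_eff = frv_ops_per_sec / frv_watts            # ~17 billion operations per watt-second

    print(univac_eff, frv_eff)
    print(frv_eff / univac_eff)      # ~1.1e12, i.e. an improvement of over a trillion times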

FLOPS per watt

Exponential growth of supercomputer performance per watt based on data from the Green500 list. The red crosses denote the most power-efficient computer, while the blue ones denote the computer ranked #500.

FLOPS (Floating Point Operations Per Second) per watt is a common measure. Like the FLOPS it is based on, the metric is usually applied to scientific computing and simulations involving many floating point calculations.

Examples

As of June 2012, the Green500 list rates BlueGene/Q, Power BQC 16C as the most efficient supercomputer on the TOP500 in terms of FLOPS per watt, running at 2,100.88 MFLOPS/watt.[4]

However, in early 2014 NVIDIA released the Tegra K1 mobile SoC, containing a GPU with over 326 GFLOPS of peak performance[5] at roughly 10 watts,[6] obtaining over 50,000 MFLOPS/watt and thus being roughly 25 times more efficient than even the Blue Gene/Q.

On 9 June 2008, CNN reported that IBM's Roadrunner supercomputer achieved 376 MFLOPS/watt.[7][8]

In November 2010, the IBM Blue Gene/Q achieved 1,684 MFLOPS/watt.[9][10]

As part of Intel's Tera-Scale research project, the team produced an 80 core CPU that can achieve over 16,000 MFLOPS/watt.[11][12] The future of that CPU is not certain.

Microwulf, a low cost desktop Beowulf cluster of 4 dual core Athlon 64 x2 3800+ computers, runs at 58 MFLOPS/watt.[13]

Kalray has developed a 256 core VLIW CPU that achieves 25 GFLOPS/watt. Next generation is expected to achieve 75 GFLOPS/watt.[14]

Green500 List

The Green500 list ranks computers from the TOP500 list of supercomputers in terms of energy efficiency, typically measured as LINPACK FLOPS per watt.[15][16]

As of November 2014, the L-CSC supercomputer of the Helmholtz Association at the GSI in Darmstadt, Germany, tops the Green500 list with 5,271 MFLOPS/W and was the first cluster to surpass an efficiency of 5 GFLOPS/W. It runs on Intel Xeon E5-2690 processors with the Intel Ivy Bridge architecture and AMD FirePro S9150 GPU accelerators, and it uses in-rack water cooling and cooling towers to reduce the energy required for cooling.[17]

In June 2013, the Eurotech supercomputer Eurora at Cineca topped the Green500 list with 3,208 LINPACK MFLOPS/W.[18] The Cineca Eurora supercomputer is equipped with two Intel Xeon E5-2687W CPUs and two PCIe-connected NVIDIA Tesla K20 accelerators per node. Water cooling and electronics design allow very high densities to be reached, with a peak performance of 350 TFlop/s per rack.[19]

In November 2012, an Appro International Xtreme-X supercomputer (Beacon) topped the Green500 list with 2,499 LINPACK MFLOPS/W.[20] Beacon is deployed by NICS of the University of Tennessee and is a GreenBlade GB824M system based on eight-core (8C) 2.6 GHz Xeon E5-2670 processors, FDR InfiniBand, and Intel Xeon Phi 5110P coprocessors.[19]

GPU efficiency

Graphics processing units (GPUs) have continued to increase in energy usage, while CPU designers have recently focused on improving performance per watt. High-performance GPUs may draw large amounts of power, and hence intelligent techniques are required to manage GPU power consumption.[21] Measures like 3DMark2006 score per watt can help identify more efficient GPUs.[22] However, that may not adequately incorporate efficiency in typical use, where much time is spent doing less demanding tasks.[23]

With modern GPUs, energy usage is an important constraint on achievable performance. GPU designs are usually highly scalable, allowing the manufacturer to put multiple chips on the same video card, or to use multiple video cards that work in parallel. Peak performance of any system is essentially limited by the amount of power it can draw and the amount of heat it can dissipate. Consequently, performance per watt of a GPU design translates directly into the peak performance of a system that uses that design.

Since GPUs may also be used for some general purpose computation, sometimes their performance is measured in terms also applied to CPUs, such as FLOPS per watt.

Challenges

While performance per watt is useful, absolute power requirements are also important. Claims of improved performance per watt may be used to mask increasing power demands. For instance, though newer generation GPU architectures may provide better performance per watt, continued performance increases can negate the gains in efficiency, and the GPUs continue to consume large amounts of power.[24]

Benchmarks that measure power under heavy load may not adequately reflect typical efficiency. For instance, 3DMark stresses the 3D performance of a GPU, but many computers spend most of their time doing less intense display tasks (idle, 2D tasks, displaying video). So the 2D or idle efficiency of the graphics system may be at least as significant for overall energy efficiency. Likewise, systems that spend much of their time in standby or soft off are not adequately characterized by just efficiency under load. To help address this, some benchmarks, like SPECpower, include measurements at a series of load levels.[25]
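As an illustration of how a series of load-level measurements can be folded into a single efficiency figure, here is a minimal Python sketch that divides total measured throughput by total measured power, which is the general idea behind SPECpower-style overall metrics; the load levels and numbers are invented for the example:

    # Hypothetical (ssj_ops, average watts) pairs at 100% load, 50% load, and active idle.
    levels = [(900_000, 300), (450_000, 220), (0, 150)]

    overall_ops_per_watt = sum(ops for ops, _ in levels) / sum(watts for _, watts in levels)
    print(round(overall_ops_per_watt))   # single efficiency figure across all measured levels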

The efficiency of some electrical components, such as voltage regulators, decreases with increasing temperature, so the power used may increase with temperature. Power supplies, motherboards, and some video cards are some of the subsystems affected by this. So their power draw may depend on temperature, and the temperature or temperature dependence should be noted when measuring.[26][27]

Performance per watt also typically does not include full life-cycle costs. Since computer manufacturing is energy intensive, and computers often have a relatively short lifespan, energy and materials involved in production, distribution, disposal and recycling often make up significant portions of their cost, energy use, and environmental impact.[28][29]

Energy required for climate control of the computer's surroundings is often not counted in the wattage calculation, but can be significant.[30]

Other energy efficiency measures

SWaP (space, wattage and performance) is a Sun Microsystems metric for data centers, incorporating energy and space.

SWaP = Performance / (Space × Power)

Where performance is measured by any appropriate benchmark, and space is the size of the computer.[31]
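A minimal Python sketch of the SWaP calculation; the benchmark score, rack-unit height, and wattage below are illustrative assumptions rather than measurements of any real system:

    # SWaP = performance / (space * power)
    performance = 250_000        # hypothetical benchmark score, e.g. operations per second
    space_rack_units = 2         # height of the system in rack units
    power_watts = 450            # average power draw while running the benchmark

    swap = performance / (space_rack_units * power_watts)
    print(round(swap, 1))        # benchmark score per rack-unit-watt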



IOPS


IOPS (Input/Output Operations Per Second, pronounced eye-ops) is a common performance measurement used to benchmark computer storage devices like hard disk drives (HDD), solid state drives (SSD), and storage area networks (SAN). As with any benchmark, IOPS numbers published by storage device manufacturers do not guarantee real-world application performance.[1][2]

IOPS can be measured with applications, such as Iometer (originally developed by Intel), as well as IOzone and FIO[3] and is primarily used with servers to find the best storage configuration.

The specific number of IOPS possible in any system configuration will vary greatly, depending upon the variables the tester enters into the program, including the balance of read and write operations, the mix of sequential and random access patterns, the number of worker threads and queue depth, as well as the data block sizes.[1] There are other factors which can also affect the IOPS results including the system setup, storage drivers, OS background operations, etc. Also, when testing SSDs in particular, there are preconditioning considerations that must be taken into account.[4]
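Tools such as Iometer, IOzone, and FIO are the appropriate way to measure IOPS. Purely to illustrate the variables involved (block size, random offsets, operation count), here is a minimal Python sketch that times random reads against an existing file; it reads through the operating system's page cache rather than the raw device, so the figure it reports will be far higher than the device's true random-read IOPS:

    import os, random, time

    def rough_random_read_ops(path, block_size=4096, ops=5000):
        """Illustrative only: time random reads of an existing file through the page cache."""
        size = os.path.getsize(path)
        fd = os.open(path, os.O_RDONLY)
        try:
            start = time.perf_counter()
            for _ in range(ops):
                offset = random.randrange(0, max(size - block_size, 1))
                os.pread(fd, block_size, offset)      # read block_size bytes at a random offset
            elapsed = time.perf_counter() - start
        finally:
            os.close(fd)
        return ops / elapsed

    # Example (hypothetical path): print(rough_random_read_ops("/tmp/testfile.bin"))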

Performance characteristics

Random access compared to sequential access.

The most common performance characteristics measured are sequential and random operations. Sequential operations access locations on the storage device in a contiguous manner and are generally associated with large data transfer sizes, e.g., 128 KB. Random operations access locations on the storage device in a non-contiguous manner and are generally associated with small data transfer sizes, e.g., 4 KB.

The most common performance characteristics are as follows:

Measurement | Description
Total IOPS | Total number of I/O operations per second (when performing a mix of read and write tests)
Random Read IOPS | Average number of random read I/O operations per second
Random Write IOPS | Average number of random write I/O operations per second
Sequential Read IOPS | Average number of sequential read I/O operations per second
Sequential Write IOPS | Average number of sequential write I/O operations per second

For HDDs and similar electromechanical storage devices, the random IOPS numbers are primarily dependent upon the storage device's random seek time, whereas for SSDs and similar solid state storage devices, the random IOPS numbers are primarily dependent upon the storage device's internal controller and memory interface speeds. On both types of storage devices the sequential IOPS numbers (especially when using a large block size) typically indicate the maximum sustained bandwidth that the storage device can handle.[1] Often sequential IOPS are reported as a simple MB/s number as follows:

IOPS × TransferSizeInBytes = BytesPerSec (with the answer typically converted to MegabytesPerSec)
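For example, a minimal Python sketch of that conversion (the IOPS figure and 128 KB transfer size are illustrative, not taken from any particular drive):

    # Sequential throughput from sequential IOPS and the transfer (block) size.
    sequential_iops = 2000
    transfer_size_bytes = 128 * 1024             # 128 KB transfers, as in the example above

    bytes_per_sec = sequential_iops * transfer_size_bytes
    print(bytes_per_sec / 1_000_000)             # ~262 MB/s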

Some HDDs will improve in performance as the number of outstanding I/Os (i.e. queue depth) increases. This is usually the result of more advanced controller logic on the drive performing command queuing and reordering, commonly called either Tagged Command Queuing (TCQ) or Native Command Queuing (NCQ). Most commodity SATA drives either cannot do this, or their implementation is so poor that no performance benefit can be seen.[citation needed] Enterprise-class SATA drives, such as the Western Digital Raptor and Seagate Barracuda NL, will improve by nearly 100% with deep queues.[5] High-end SCSI drives more commonly found in servers generally show much greater improvement, with the Seagate Savvio exceeding 400 IOPS, more than doubling its performance.[citation needed]

While traditional HDDs have about the same IOPS for read and write operations, most NAND flash-based SSDs are much slower writing than reading due to the inability to rewrite directly into a previously written location forcing a procedure called garbage collection.[6][7][8] This has caused hardware test sites to start to provide independently measured results when testing IOPS performance.

Newer flash SSD drives such as the Intel X25-E have much higher IOPS than traditional hard disk drives. In a test done by Xssist using IOmeter, with 4 KB random transfers, a 70/30 read/write ratio, and a queue depth of 4, the IOPS delivered by the Intel X25-E 64 GB G1 started at around 10,000 IOPS, dropped sharply after 8 minutes to 4,000 IOPS, and continued to decrease gradually for the next 42 minutes. IOPS varied between 3,000 and 4,000 from around the 50th minute onwards for the rest of the 8+ hour test run.[9] Even with the drop in random IOPS after the 50th minute, the X25-E still has much higher IOPS than traditional hard disk drives. Some SSDs, including the OCZ RevoDrive 3 x2 PCIe using the SandForce controller, have shown much higher sustained write performance that more closely matches the read speed.[10]

Examples

Mechanical hard drives

Some commonly accepted averages for random IO operations, calculated as 1/(seek + latency) = IOPS:

Device | Type | IOPS | Interface | Notes
7,200 rpm SATA drives | HDD | ~75-100 IOPS[2] | SATA 3 Gbit/s |
10,000 rpm SATA drives | HDD | ~125-150 IOPS[2] | SATA 3 Gbit/s |
10,000 rpm SAS drives | HDD | ~140 IOPS[2] | SAS |
15,000 rpm SAS drives | HDD | ~175-210 IOPS[2] | SAS |
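A minimal Python sketch of the 1/(seek + latency) estimate for a 7,200 rpm drive; the average seek time used here is an illustrative assumption, not a quoted specification:

    # Random IOPS ~= 1 / (average seek time + average rotational latency)
    rpm = 7200
    avg_seek_s = 0.0085                         # assumed ~8.5 ms average seek time
    avg_rotational_latency_s = 0.5 * 60 / rpm   # half a revolution, ~4.2 ms at 7,200 rpm

    iops_estimate = 1 / (avg_seek_s + avg_rotational_latency_s)
    print(round(iops_estimate))                 # ~79, within the ~75-100 range shown above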


Solid-state devices

Device | Type | IOPS | Interface | Notes
Intel X25-M G2 (MLC) | SSD | ~8,600 IOPS[11] | SATA 3 Gbit/s | Intel's data sheet[12] claims 6,600/8,600 IOPS (80 GB/160 GB version) and 35,000 IOPS for random 4 KB writes and reads, respectively.
Intel X25-E (SLC) | SSD | ~5,000 IOPS[13] | SATA 3 Gbit/s | Intel's data sheet[14] claims 3,300 IOPS and 35,000 IOPS for writes and reads, respectively. 5,000 IOPS are measured for a mix. The Intel X25-E G1 has around 3 times higher IOPS than the Intel X25-M G2.[15]
G.Skill Phoenix Pro | SSD | ~20,000 IOPS[16] | SATA 3 Gbit/s | SandForce-1200 based SSD with enhanced firmware, stated as up to 50,000 IOPS, but benchmarking shows ~25,000 IOPS for random read and ~15,000 IOPS for random write for this particular drive.[16]
OCZ Vertex 3 | SSD | Up to 60,000 IOPS[17] | SATA 6 Gbit/s | Random Write 4 KB (Aligned)
Corsair Force Series GT | SSD | Up to 85,000 IOPS[18] | SATA 6 Gbit/s | 240 GB drive, 555 MB/s sequential read & 525 MB/s sequential write, Random Write 4 KB Test (Aligned)
Samsung SSD 850 PRO | SSD | 100,000 read IOPS, 90,000 write IOPS[19] | SATA 6 Gbit/s | 4 KB aligned random I/O at QD32; 10,000 read IOPS, 36,000 write IOPS at QD1; 550 MB/s sequential read, 520 MB/s sequential write on 256 GB and larger models; 550 MB/s sequential read, 470 MB/s sequential write on 128 GB model[19]
OCZ Vertex 4 | SSD | Up to 120,000 IOPS[20] | SATA 6 Gbit/s | 256 GB drive, 560 MB/s sequential read & 510 MB/s sequential write, Random Read 4 KB Test 90K IOPS, Random Write 4 KB Test 85K IOPS
(IBM) Texas Memory Systems RamSan-20 | SSD | 120,000+ Random Read/Write IOPS[21] | PCIe | Includes RAM cache
Fusion-io ioDrive | SSD | 140,000 Read IOPS, 135,000 Write IOPS[22] | PCIe |
Virident Systems tachIOn | SSD | 320,000 sustained READ IOPS and 200,000 sustained WRITE IOPS using 4 KB blocks[23] | PCIe |
OCZ RevoDrive 3 X2 | SSD | 200,000 Random Write 4K IOPS[24] | PCIe |
Fusion-io ioDrive Duo | SSD | 250,000+ IOPS[25] | PCIe |
Violin Memory Violin 3200 | SSD | 250,000+ Random Read/Write IOPS[26] | PCIe/FC/InfiniBand/iSCSI | Flash Memory Array
WHIPTAIL, ACCELA | SSD | 250,000/200,000+ Write/Read IOPS[27] | Fibre Channel, iSCSI, Infiniband/SRP, NFS, CIFS | Flash Based Storage Array
DDRdrive X1 | SSD | 300,000+ (512B Random Read IOPS) and 200,000+ (512B Random Write IOPS)[28][29][30][31] | PCIe |
SolidFire SF3010/SF6010 | SSD | 250,000 4KB Read/Write IOPS[32] | iSCSI | Flash Based Storage Array (5RU)
(IBM) Texas Memory Systems RamSan-720 Appliance | Flash/DRAM | 500,000 Optimal Read, 250,000 Optimal Write 4KB IOPS[33] | FC / InfiniBand |
OCZ Single SuperScale Z-Drive R4 PCI-Express SSD | SSD | Up to 500,000 IOPS[34] | PCIe |
WHIPTAIL, INVICTA | SSD | 650,000/550,000+ Read/Write IOPS[35] | Fibre Channel, iSCSI, Infiniband/SRP, NFS | Flash Based Storage Array
Violin Memory Violin 6000 | 3RU Flash Memory Array | 1,000,000+ Random Read/Write IOPS[36] | FC/InfiniBand/10Gb(iSCSI)/PCIe |
(IBM) Texas Memory Systems RamSan-630 Appliance | Flash/DRAM | 1,000,000+ 4KB Random Read/Write IOPS[37] | FC / InfiniBand |
IBM FlashSystem 840 | Flash/DRAM | 1,100,000+ 4KB Random Read / 600,000 4KB Write IOPS[38] | 8G FC / 16G FC / 10G FCoE / InfiniBand | Modular 2U Storage Shelf, 4 TB-48 TB
Fusion-io ioDrive Octal (single PCI Express card) | SSD | 1,180,000+ Random Read/Write IOPS[39] | PCIe |
OCZ 2x SuperScale Z-Drive R4 PCI-Express SSD | SSD | Up to 1,200,000 IOPS[34] | PCIe |
(IBM) Texas Memory Systems RamSan-70 | Flash/DRAM | 1,200,000 Random Read/Write IOPS[40] | PCIe | Includes RAM cache
Kaminario K2 | Flash/DRAM/Hybrid SSD | Up to 1,200,000 SPC-1 IOPS with the K2-D (DRAM)[41][42] | FC |
NetApp FAS6240 cluster | Flash/Disk | 1,261,145 SPECsfs2008 NFSv3 IOPS using 1,440 15K disks, across 60 shelves, with virtual storage tiering[43] | NFS, CIFS, FC, FCoE, iSCSI | SPECsfs2008 is the latest version of the Standard Performance Evaluation Corporation benchmark suite measuring file server throughput and response time, providing a standardized method for comparing performance across different vendor platforms. http://www.spec.org/sfs2008.
Fusion-io ioDrive2 | SSD | Up to 9,608,000 IOPS[44] | PCIe | Only via demonstration so far.

 

HP Active Health System

The complexity of a data center, with many system configurations and many people involved, can lead to complex system problems. Today, troubleshooting these complex problems can be a tedious, manual process that can take days or weeks to find a root cause. Administrators need cutting-edge diagnostic tools to more quickly find and fix their toughest issues.

HP Active Health System is an essential component of the HP iLO Management Engine. It is an industry-first technology that provides continuous, proactive health monitoring of over 1600 system parameters. In addition, 100% of configuration changes are logged for more accurate problem resolution. This information enables you to start problem analysis 5 times faster and spend less time with support reproducing or describing errors. In addition, the consolidated health and service alerts with precise time stamping are synchronized to improve root cause diagnosis across systems and solutions. With this advanced system telemetry you can accurately troubleshoot and resolve problems faster. And because it's completely agentless, Active Health doesn't impact application performance.
All information collected by Active Health is logged securely, isolated from the operating system and separate from any customer data. In 3-4 minutes, you can securely export the Active Health file to an HP Support professional to help you resolve your issue faster and more accurately.

Active Health System is the 24/7 Mission Control for your server. With faster diagnostic data collection and the richest, most relevant data, it is the fastest way to get your system back online and keep it running optimally.

Stop System Hijackers with Intel OS Guard

If an attacker can make your system do whatever he wants, you can be in deep trouble. Escalation-of-privilege attacks use vulnerabilities in your operating system to place the processor in supervisor mode which is meant to be reserved for highly trusted kernel code. When in supervisor mode, the processor may perform any operation allowed by its architecture. Any instruction may be executed, any I/O operation initiated, any area of memory accessed—unless your system is protected by Intel Device Protection Technology with OS Guard (Intel OS Guard).

Malware typically enters a system through application memory by compromising a user application or tricking a user into installing the malware. Intel OS Guard, built in to certain Intel Core processors, Intel Atom processors, and Intel Xeon processors and automatically enabled on supported systems, offers two types of protection against escalation-of-privilege attacks:

  • Malware execution protection. Prevents malware from executing code in application memory space by instructing the processor to not execute any code that comes from application memory while the processor is in supervisor mode.
  • User data access protection. Prevents malware from accessing data in user pages by instructing the processor to block access to application memory while the processor is in supervisor mode.

 

There should be no legitimate reason for the processor to be in supervisor mode when it runs code from application memory, and with Intel OS Guard, the processor can block the execution of any code that resides in application memory while the processor is in supervisor mode. Because malware resides in application memory, Intel OS Guard can keep it from running code in supervisor mode which can prevent malware from performing operations reserved for the kernel.

Likewise, there are rarely valid reasons for the processor to be in supervisor mode while data in application memory is being read or written, and with Intel OS Guard, the processor blocks access to data in application memory. For unusual cases where accessing user data in application memory needs to be done in supervisor mode, this Intel OS Guard protection can be carefully and temporarily turned off.

Unified Extensible Firmware Interface


Extensible Firmware Interface's position in the software stack.

The Unified Extensible Firmware Interface (UEFI, pronounced as an initialism U-E-F-I or like "unify" without the n[a]) is a specification that defines a software interface between an operating system and platform firmware. UEFI is meant to replace the Basic Input/Output System (BIOS) firmware interface, originally present in all IBM PC-compatible personal computers.[2][3] In practice, most UEFI firmware images provide legacy support for BIOS services. UEFI can support remote diagnostics and repair of computers, even without another operating system.[4]

Intel developed the original EFI (Extensible Firmware Interface) specification. Some of EFI's practices and data formats mirror those of Microsoft Windows.[5][6] In 2005, UEFI deprecated EFI 1.10 (the final release of EFI). The Unified EFI Forum manages the UEFI specification.

History

The original motivation for EFI came during early development of the first Intel–HP Itanium systems in the mid-1990s. BIOS limitations (such as 16-bit processor mode, 1 MB addressable space and PC AT hardware) were unacceptable for the larger server platforms Itanium was targeting.[7] The effort to address these concerns began in 1998 and was initially called Intel Boot Initiative;[8] it was later renamed to EFI.[9][10]

In July 2005, Intel ceased development of the EFI specification at version 1.10, and contributed it to the Unified EFI Forum, which has evolved the specification as the Unified Extensible Firmware Interface (UEFI). The original EFI specification remains owned by Intel, which exclusively provides licenses for EFI-based products, but the UEFI specification is owned by the Forum.[7][11]

Version 2.1 of the UEFI (Unified Extensible Firmware Interface) specification was released on 7 January 2007. It added cryptography, network authentication and the User Interface Architecture (Human Interface Infrastructure in UEFI). The current UEFI specification, version 2.4, was approved in July 2013.

Advantages

Interaction between the EFI boot manager and EFI drivers

The interface defined by the EFI specification includes data tables that contain platform information, and boot and runtime services that are available to the OS loader and OS. UEFI firmware provides several technical advantages over a traditional BIOS system:[12]

  • Ability to boot from large disks (over 2 TB) with a GUID Partition Table (GPT)[13][b]
  • CPU-independent architecture[b]
  • CPU-independent drivers[b]
  • Flexible pre-OS environment, including network capability
  • Modular design

Compatibility

Processor compatibility

As of version 2.4, processor bindings exist for Itanium, x86, x86-64, ARM (AArch32) and ARM64 (AArch64).[14] Only little-endian processors can be supported.[15]

A normal PC BIOS is limited to a 16-bit processor mode and 1 MB of addressable space due to the design being based on the IBM 5150, which used the 16-bit Intel 8088.[7][16] In comparison, the processor mode in a UEFI environment can be either 32-bit (x86-32, AArch32) or 64-bit (x86-64, Itanium, and AArch64).[7][17] 64-bit UEFI firmware implementations understand long mode, which allows applications in the pre-boot execution environment to have direct access to all of the memory using 64-bit addressing.[18]

UEFI requires the firmware and operating system loader (or kernel) to be size-matched; for example, a 64-bit UEFI firmware implementation can only load a 64-bit UEFI operating system boot loader or kernel. After the system transitions from "Boot Services" to "Runtime Services", the operating system kernel takes over. At this point, the kernel can change processor modes if it desires, but this bars usage of the runtime services (unless the kernel switches back again).[19]:sections 2.3.2 and 2.3.4 As of version 3.15, the Linux kernel supports booting 64-bit kernels on 32-bit UEFI firmware implementations running on x86-64 CPUs, with UEFI handover support from a UEFI boot loader as the requirement.[20] The UEFI handover protocol deduplicates the UEFI initialization code between the kernel and UEFI boot loaders, leaving the initialization to be performed only by the Linux kernel's UEFI boot stub.[21][22]

Disk device compatibility

See also: GPT § Operating systems support and Protective MBR

In addition to the standard PC disk partition scheme, which uses a master boot record (MBR), UEFI works with a new partitioning scheme: GUID Partition Table (GPT). GPT is free from many of the limitations of MBR. In particular, the MBR limits on the number and size of disk partitions (up to 4 primary partitions per disk, up to 2 TB (2 × 2^40 bytes) per disk) are relaxed.[23] GPT allows for a maximum disk and partition size of 8 ZB (8 × 2^70 bytes).[23][24]
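As a quick check of those limits, a minimal Python sketch assuming the traditional 512-byte logical sector size (MBR stores 32-bit sector counts, GPT uses 64-bit LBAs):

    sector_bytes = 512                     # assumed logical sector size
    mbr_max_bytes = 2**32 * sector_bytes   # 2 TB limit: 2 x 2^40 bytes
    gpt_max_bytes = 2**64 * sector_bytes   # 8 ZB limit: 8 x 2^70 bytes

    print(mbr_max_bytes == 2 * 2**40)      # True
    print(gpt_max_bytes == 8 * 2**70)      # True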

The UEFI specification explicitly requires support for FAT32 for EFI System partitions (ESPs), and FAT16 or FAT12 for removable media;[19]:section 12.3 specific implementations may support other file systems.

Linux

See also: EFI System partition and Linux

Support for GPT in Linux is enabled by turning on the option CONFIG_EFI_PARTITION (EFI GUID Partition Support) during kernel configuration.[25] This option allows Linux to recognize and use GPT disks after the system firmware passes control over the system to Linux.

For reverse compatibility, Linux can use GPT disks in BIOS-based systems for both data storage and booting, as both GRUB 2 and Linux are GPT-aware. Such a setup is usually referred to as BIOS-GPT.[26] As GPT incorporates the protective MBR, a BIOS-based computer can boot from a GPT disk using a GPT-aware boot loader stored in the protective MBR's bootstrap code area.[24] In the case of GRUB, such a configuration requires a BIOS boot partition for GRUB to embed its second-stage code, due to the absence of the post-MBR gap in GPT-partitioned disks (which is taken over by the GPT's Primary Header and Primary Partition Table). Commonly 1 MiB in size, this partition's Globally Unique Identifier in the GPT scheme is 21686148-6449-6E6F-744E-656564454649 and it is used by GRUB only in BIOS-GPT setups. From GRUB's perspective, no such partition type exists in the case of MBR partitioning. This partition is not required if the system is UEFI-based, as there is no such embedding of the second-stage code in that case.[13][24][26]

UEFI systems can access GPT disks and boot directly from them, simplifying things and allowing UEFI boot methods for Linux. Booting Linux from GPT disks on UEFI systems involves the creation of an EFI System partition (ESP), which contains UEFI applications such as bootloaders, operating system kernels, and utility software.[27][28][29] Such a setup is usually referred to as UEFI-GPT, and the ESP is recommended to be at least 512 MiB in size and formatted with a FAT32 filesystem for maximum compatibility.[24][26][30]

For backwards compatibility, most of the UEFI implementations also support booting from MBR-partitioned disks, through the Compatibility Support Module (CSM) which provides legacy BIOS compatibility.[31] In that case, booting Linux on UEFI systems is the same as on legacy BIOS-based systems.

Microsoft Windows

The 64-bit versions of Microsoft Windows Vista[32] and later, 32-bit versions of Windows 8, and the Itanium versions of Windows XP and Server 2003 can boot from disks with a partition size larger than 2 TB.

Features

Services

EFI defines two types of services: boot services and runtime services. Boot services are only available while the firmware owns the platform (before the ExitBootServices call). Boot services include text and graphical consoles on various devices, and bus, block and file services. Runtime services are still accessible while the operating system is running; they include services such as date, time and NVRAM access.

In addition, the Graphics Output Protocol (GOP) provides limited runtime services support; see also Graphics features section below. The operating system is permitted to directly write to the framebuffer provided by GOP during runtime mode. However, the ability to change video modes is lost after transitioning to runtime services mode until the OS graphics driver is loaded.

Variable services

UEFI variables provide a way to store data, in particular non-volatile data, that is shared between platform firmware and operating systems or UEFI applications. Variable namespaces are identified by GUIDs, and variables are key/value pairs. For example, variables can be used to keep crash messages in NVRAM after a crash for the operating system to retrieve after a reboot.[33]
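On a Linux system booted through UEFI, these variables are usually exposed via efivarfs as files named <VariableName>-<VendorGUID>. A minimal Python sketch for listing them; it assumes the standard /sys/firmware/efi/efivars mount point, which is absent on BIOS/CSM boots and may require appropriate permissions to read:

    import os

    EFIVARS = "/sys/firmware/efi/efivars"   # standard efivarfs mount point on Linux

    def list_uefi_variables():
        """Yield (variable_name, vendor_guid) pairs; the GUID is the last 36 characters."""
        for entry in os.listdir(EFIVARS):
            yield entry[:-37], entry[-36:]

    for name, guid in list_uefi_variables():
        print(name, guid)                   # e.g. BootOrder 8be4df61-93ca-11d2-aa0d-00e098032b8c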

Time services

UEFI provides device-independent time services. Time services include support for timezone and daylight saving fields, which allow the hardware real-time clock to be set to local time or UTC.[34] On machines using a PC-AT real-time clock, the clock still has to be set to local time for compatibility with BIOS-based Windows.[6]

Applications

Independently of loading an operating system, UEFI has the ability to run standalone UEFI applications, which can be developed and installed independently of the system manufacturer. UEFI applications reside as files on the ESP and can be started directly by the firmware's boot manager, or by other UEFI applications. One class of UEFI applications is operating system loaders, such as rEFInd, Gummiboot, and Windows Boot Manager; they start a specific operating system and optionally provide a user interface for the selection of another UEFI application to run. Utilities like the UEFI shell are also UEFI applications.

Protocols

EFI defines protocols as a set of software interfaces used for communication between two binary modules. All EFI drivers must provide services to others via protocols.

Device drivers

In addition to standard architecture-specific device drivers, the EFI specification provides for a processor-independent device driver environment, called EFI byte code or EBC. System firmware is required by the UEFI specification to carry an interpreter for any EBC images that reside in or are loaded into the environment. In that sense, EBC is similar to Open Firmware, the hardware-independent firmware used in PowerPC-based Apple Macintosh and Sun Microsystems SPARC computers, among others.

Some architecture-specific (non-EBC) EFI device driver types can have interfaces for use from the operating system. This allows the OS to rely on EFI for basic graphics and network functions until OS specific drivers are loaded.

Graphics features

The EFI specification defined a UGA (Universal Graphic Adapter) protocol as a way to support device-independent graphics. UEFI did not include UGA and replaced it with GOP (Graphics Output Protocol), with the explicit goal of removing VGA hardware dependencies. The two are similar.[35]

UEFI 2.1 defined a "Human Interface Infrastructure" (HII) to manage user input, localized strings, fonts, and forms (in the HTML sense). These enable original equipment manufacturers (OEMs) or independent BIOS vendors (IBVs) to design graphical interfaces for pre-boot configuration; UEFI itself does not define a user interface.

Most early UEFI firmware implementations were console-based, but as early as 2007 some implementations featured a graphical user interface.[36]

EFI System partition

Main article: EFI System partition

EFI System partition, often abbreviated to ESP, is a data storage device partition that is used in computers adhering to the UEFI specification. Accessed by the UEFI firmware when a computer is powered up, it stores UEFI applications and the files these applications need to run, including operating system kernels. Supported partition table schemes include MBR and GPT, as well as El Torito volumes on optical disks.[19]:section 2.6.2 For the use on ESPs, UEFI defines a specific version of the FAT file system, which encompasses FAT32 file systems on ESPs, and FAT16 and FAT12 on removable media.[19]:section 12.3 The ESP provides space for a boot sector as part of the BIOS backward compatibility.[31]

Booting

UEFI booting

Unlike BIOS, UEFI does not rely on a boot sector, defining instead a boot manager as part of the UEFI specification. When a computer is powered on, the boot manager checks the boot configuration and, based on its settings, loads and executes the specified operating system loader or operating system kernel. The boot configuration is a set of global-scope variables stored in NVRAM, including the boot variables that indicate the paths to operating system loaders or kernels, which as a component class of UEFI applications are stored as files on the firmware-accessible EFI System partition (ESP).

Operating system loaders can also be automatically detected by a UEFI implementation, which enables easy booting from removable devices such as USB flash drives. This automated detection relies on a standardized file path to the operating system loader, with the path depending on the computer architecture. The format of the file path is defined as <EFI_SYSTEM_PARTITION>/BOOT/BOOT<MACHINE_TYPE_SHORT_NAME>.EFI; for example, on an x86-64 computer the path is /efi/BOOT/BOOTX64.EFI.[19]
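A minimal Python sketch of constructing that default path for a few architectures, following the example above; the machine type short names match those listed under Operating systems below, and the helper itself is purely illustrative:

    # Default removable-media boot loader path, relative to the root of the EFI System partition.
    MACHINE_TYPE_SHORT_NAMES = {
        "x86": "IA32", "x86-64": "X64", "itanium": "IA64", "arm": "ARM", "aarch64": "AA64",
    }

    def default_boot_path(arch):
        return "/efi/BOOT/BOOT" + MACHINE_TYPE_SHORT_NAMES[arch] + ".EFI"

    print(default_boot_path("x86-64"))   # /efi/BOOT/BOOTX64.EFI, as in the example above
    print(default_boot_path("aarch64"))  # /efi/BOOT/BOOTAA64.EFI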

Booting UEFI systems from GPT-partitioned disks is commonly called UEFI-GPT booting. Additionally, it is common for a UEFI implementation to include a user interface to the boot manager, allowing the user to manually select the desired operating system (or system utility) from the list of available boot options and load it.

CSM booting

For backwards compatibility, most of the UEFI firmware implementations on PC-class machines also support booting in legacy BIOS mode from MBR-partitioned disks, through the Compatibility Support Module (CSM) which provides legacy BIOS compatibility. In that scenario, booting is performed in the same way as on legacy BIOS-based systems, by ignoring the partition table and relying on the content of a boot sector.[31]

BIOS booting from MBR-partitioned disks is commonly called BIOS-MBR, regardless of it being performed on UEFI or legacy BIOS-based systems. As a side note, booting legacy BIOS-based systems from GPT disks is also possible, and it is commonly called BIOS-GPT.

Despite the fact that MBR partition tables are required to be fully supported within the UEFI specification,[19] some UEFI firmware implementations immediately switch to BIOS-based CSM booting depending on the type of the boot disk's partition table, thus preventing UEFI booting from being performed from EFI System partitions on MBR-partitioned disks.[31] Such a scheme is commonly called UEFI-MBR.

Network booting

The UEFI specification includes support for booting over a network through the Preboot eXecution Environment (PXE). Underlying network protocols include Internet Protocol (IPv4 and IPv6), User Datagram Protocol (UDP), Dynamic Host Configuration Protocol (DHCP) and Trivial File Transfer Protocol (TFTP).[19][37]

Also included is support for boot images remotely stored on storage area networks (SANs), with Internet Small Computer System Interface (iSCSI) and Fibre Channel over Ethernet (FCoE) as supported protocols for accessing the SANs.[19][38][39]

Secure boot

See also: Secure boot criticism

The UEFI 2.2 specification adds a protocol known as secure boot, which can secure the boot process by preventing the loading of drivers or OS loaders that are not signed with an acceptable digital signature. When secure boot is enabled, it is initially placed in "setup" mode, which allows a public key known as the "Platform key" (PK) to be written to the firmware. Once the key is written, secure boot enters "User" mode, where only drivers and loaders signed with the platform key can be loaded by the firmware. Additional "Key Exchange Keys" (KEK) can be added to a database stored in memory to allow other certificates to be used, but they must still have a connection to the private portion of the Platform key.[40] Secure boot can also be placed in "Custom" mode, where additional public keys can be added to the system that do not match the private key.[41]

Secure boot is supported by Windows 8, Windows Server 2012, FreeBSD, and a number of Linux distributions including Fedora, OpenSuse, and Ubuntu.[42]

Compatibility Support Module

The Compatibility Support Module (CSM) is a component of the UEFI firmware that provides legacy BIOS compatibility by emulating a BIOS environment, allowing legacy operating systems and some option ROMs that do not support UEFI to still be used.[43]

CSM also provides required legacy System Management Mode (SMM) functionality, called CompatibilitySmm, as an addition to features provided by the UEFI SMM. This is optional, and highly chipset and platform specific. An example of such a legacy SMM functionality is providing USB legacy support for keyboard and mouse, by emulating their classic PS/2 counterparts.[43]

UEFI shell

UEFI provides a shell environment, which can be used to execute other UEFI applications, including UEFI boot loaders.[29] Apart from that, commands available in the UEFI shell can be used for obtaining various other information about the system or the firmware, including getting the memory map (memmap), modifying boot manager variables (bcfg), running partitioning programs (diskpart), loading UEFI drivers, and editing text files (edit).[44][45][46]

Source code for a UEFI shell can be downloaded from the Intel's TianoCore UDK2010 / EDK2 SourceForge project.[47] Shell v2 works best in UEFI 2.3+ systems and is recommended over the shell v1 in those systems. Shell v1 should work in all UEFI systems.[44][48][49]

Methods used for launching the UEFI shell depend on the manufacturer and model of the system motherboard. Some of them already provide a direct option in the firmware setup for launching the shell; in that case, a compiled x86-64 version of the shell needs to be made available as <EFI_SYSTEM_PARTITION>/SHELLX64.EFI. Some other systems have an already embedded UEFI shell which can be launched with the appropriate key press combinations.[50][51] For other systems, the solution is either creating an appropriate USB flash drive or manually adding (with bcfg) a boot option associated with the compiled version of the shell.[46][50][52][53]

Extensions

Extensions to EFI can be loaded from virtually any non-volatile storage device attached to the computer. For example, an original equipment manufacturer (OEM) can distribute systems with an EFI partition on the hard drive, which would add additional functions to the standard EFI firmware stored on the motherboard's ROM.

Implementation and adoption

Intel EFI

Intel's implementation of EFI is the Intel Platform Innovation Framework, codenamed "Tiano." Tiano runs on Intel's XScale, Itanium and IA-32 processors, and is proprietary software, although a portion of the code has been released under the BSD license or Eclipse Public License (EPL) as TianoCore. TianoCore can be used as a payload for coreboot.[54]

Phoenix Technologies' implementations of UEFI include its SecureCore and SecureCore Tiano products.[55] American Megatrends offers its own UEFI firmware implementation known as Aptio,[56] while Insyde Software offers InsydeH2O, its own implementation of Tiano.[57]

Platforms using EFI/UEFI

Intel's first Itanium workstations and servers, released in 2000, implemented EFI 1.02.

Hewlett-Packard's first Itanium 2 systems, released in 2002, implemented EFI 1.10; they were able to boot Windows, Linux, FreeBSD and HP-UX; OpenVMS added UEFI capability in June 2003.

In January 2006, Apple Inc. shipped its first Intel-based Macintosh computers. These systems used EFI instead of Open Firmware, which had been used on its previous PowerPC-based systems.[58] On 5 April 2006, Apple first released Boot Camp, which produces a Windows drivers disk and a non-destructive partitioning tool to allow the installation of Windows XP or Vista without requiring a reinstallation of Mac OS X. A firmware update was also released that added BIOS compatibility to its EFI implementation. Subsequent Macintosh models shipped with the newer firmware.[59]

During 2005, more than one million Intel systems shipped with Intel's implementation of UEFI.[60] New mobile, desktop and server products, using Intel's implementation of UEFI, started shipping in 2006. For instance, boards that use the Intel 945 chipset series use Intel's UEFI firmware implementation.

Since 2005, EFI has also been implemented on non-PC architectures, such as embedded systems based on XScale cores.[60]

The EDK (EFI Developer Kit) includes an NT32 target, which allows EFI firmware and EFI applications to run within a Windows application. However, no direct hardware access is allowed by EDK NT32, which means only a subset of EFI applications and drivers can be executed on the EDK NT32 target.

In 2008, more x86-64 systems adopted UEFI. While many of these systems still allowed booting only BIOS-based OSes via the Compatibility Support Module (CSM) (thus not appearing to the user to be UEFI-based), other systems started to allow booting UEFI-based OSes. Examples include the IBM x3450 server, MSI motherboards with ClickBIOS, HP EliteBook notebook and tablet PCs, and newer HP Compaq notebook PCs (e.g., the 6730b and 6735b).

In 2009, IBM shipped System x machines (x3550 M2, x3650 M2, iDataPlex dx360 M2) and BladeCenter HS22 with UEFI capability. Dell shipped PowerEdge T610, R610, R710, M610 and M710 servers with UEFI capability. More commercially available systems are mentioned in a UEFI whitepaper.[61]

In 2011, major vendors (such as ASRock, Asus, Gigabyte, and MSI) launched several consumer-oriented motherboards using the Intel 6-series LGA 1155 chipset and AMD 9 Series AM3+ chipsets with UEFI.[62]

With the release of Windows 8 in October 2012, Microsoft's certification requirements now require that computers include firmware that implements the UEFI specification. Furthermore, if the computer supports the "Connected Standby" feature of Windows 8 (which allows devices to have power management comparable to smartphones, with an almost instantaneous return from standby mode), then the firmware is not permitted to contain a Compatibility Support Module (CSM). As such, systems that support Connected Standby are incapable of booting Legacy BIOS operating systems.[63][64]

Operating systems

An operating system that can be booted from (U)EFI is called a (U)EFI-aware OS, as defined by the (U)EFI specification. Here, booting from (U)EFI means directly booting the system using a (U)EFI OS loader stored on a storage device. The default location for the operating system loader is <EFI_SYSTEM_PARTITION>/EFI/BOOT/BOOT<MACHINE_TYPE_SHORT_NAME>.EFI, where the machine-type short name can be IA32, X64, IA64, ARM or AA64 (for example, BOOTX64.EFI on an x86-64 system).[19] Operating system vendors may provide their own boot loaders and may also change the default boot location.
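
The fallback-loader naming convention can be shown with a short sketch. This is an illustration only, not part of any specification: it assumes the EDK2/EADK shell-application environment described under "Applications development" below, and the machine-type value is a hypothetical example. It uses UnicodeSPrint from the EDK2 print library to build the path for a given architecture.

#include <Uefi.h>
#include <Library/UefiLib.h>
#include <Library/PrintLib.h>
#include <Library/ShellCEntryLib.h>
 
EFI_STATUS EFIAPI ShellAppMain(IN UINTN Argc, IN CHAR16 **Argv)
{
    // Hypothetical example value; IA32, IA64, ARM or AA64 work the same way.
    CONST CHAR16 *MachineType = L"X64";
    CHAR16 Path[64];
 
    // Build the fallback loader path on the EFI system partition,
    // e.g. \EFI\BOOT\BOOTX64.EFI for an x86-64 system.
    UnicodeSPrint(Path, sizeof (Path), L"\\EFI\\BOOT\\BOOT%s.EFI", MachineType);
    Print(L"Default loader path: %s\n", Path);
    return EFI_SUCCESS;
}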

  • The Linux kernel has been able to use EFI at boot time since early 2000,[65] using the elilo EFI boot loader or, more recently, EFI versions of GRUB.[66] GRUB+Linux also supports booting from a GUID partition table without UEFI.[13] The Ubuntu distribution added support for UEFI secure boot as of version 12.10.[67] Furthermore, the Linux kernel can be compiled with an option to run as an EFI boot loader on its own through the EFI boot stub feature.
  • HP-UX has used (U)EFI as its boot mechanism on IA-64 systems since 2002.
  • HP OpenVMS has used (U)EFI on IA-64 since its initial evaluation release in December 2003, and for production releases since January 2005.[68]
  • Apple uses EFI for its line of Intel-based Macs. Mac OS X v10.4 Tiger and Mac OS X v10.5 Leopard implement EFI v1.10 in 32-bit mode even on newer 64-bit CPUs, but full support arrived with Mac OS X v10.8 Mountain Lion.[69]
  • The Itanium versions of Windows 2000 (Advanced Server Limited Edition and Datacenter Server Limited Edition) implemented EFI 1.10 in 2002. Windows Server 2003 for IA-64 and Windows XP 64-bit Edition, which also target the Intel Itanium family of processors, likewise implement EFI, a requirement of the platform through the DIG64 specification.[70]
  • Microsoft introduced UEFI support for x86-64 Windows operating systems with Windows Server 2008 and Windows Vista Service Pack 1, so the 64-bit versions of Windows 7 are also compatible with UEFI. 32-bit UEFI was originally not supported, since vendors had little interest in producing native 32-bit UEFI firmware given the mainstream status of 64-bit computing.[71] Windows 8 includes further optimizations for UEFI systems, including faster startup, 32-bit support, and secure boot support.[72][73]
  • On March 5, 2013, the FreeBSD Foundation awarded a grant to a developer seeking to add UEFI support to the FreeBSD kernel and bootloader.[74] The changes were initially stored in a discrete branch of the FreeBSD source code, but were merged into the mainline source on April 4, 2014 (revision 264095); the changes include support in the installer as well.[75]
  • Oracle Solaris 11.1 and later support UEFI boot for x86 systems with UEFI firmware version 2.1 or later. GRUB 2 is used as the boot loader on x86.[76]

Use of UEFI with virtualization

  • HP Integrity Virtual Machines provides UEFI boot on HP Integrity Servers. It also provides a virtualized UEFI environment for the guest UEFI-aware OSes.
  • Intel hosts an Open Virtual Machine Firmware project on SourceForge.[77]
  • VMware Fusion 3 software for Mac OS X can boot Mac OS X Server virtual machines using EFI. VMware Workstation unofficially supports EFI, but it must be enabled manually by editing the virtual machine's .vmx file (typically by setting firmware = "efi"), and as of 2012 Secure Boot is not yet supported.[78] ESXi/vSphere 5.0 officially supports UEFI.[79]
  • VirtualBox has implemented UEFI since version 3.1,[80] but its support is limited to Unix/Linux guest operating systems (it does not work with Windows Vista x64 and Windows 7 x64 guests).[81][82]
  • QEMU can be used with the Open Virtual Machine Firmware (OVMF) provided by TianoCore.[83]
  • The VMware ESXi version 5 hypervisor, part of VMware vSphere, supports virtualized EFI as an alternative to BIOS inside a virtual machine.
  • The second generation of Microsoft Hyper-V virtual machines supports virtualized UEFI.[84]

Applications development

The EDK2 Application Development Kit (EADK) makes it possible to use standard C library functions in UEFI applications. The EADK can be freely downloaded from Intel's TianoCore UDK2010 / EDK2 SourceForge project. As an example, a port of the Python interpreter is made available as a UEFI application by using the EADK.[85]

A minimalistic "Hello world" C program written using EADK looks similar to its usual C counterpart:

#include <Uefi.h>
#include <Library/UefiLib.h>
#include <Library/ShellCEntryLib.h>
 
// ShellAppMain is the entry point expected by ShellCEntryLib; it receives the
// shell command line as wide (CHAR16) strings, much like main(argc, argv).
EFI_STATUS EFIAPI ShellAppMain(IN UINTN Argc, IN CHAR16 **Argv)
{
    // Print() from UefiLib takes a wide-string format, hence the L"" prefix.
    Print(L"hello, world\n");
    return EFI_SUCCESS;
}
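
As a further illustration (a minimal sketch assuming the same EADK/ShellCEntryLib environment as the example above, not an official sample), the shell entry point also receives command-line arguments as wide strings, which can be printed with the same Print() function:

#include <Uefi.h>
#include <Library/UefiLib.h>
#include <Library/ShellCEntryLib.h>
 
EFI_STATUS EFIAPI ShellAppMain(IN UINTN Argc, IN CHAR16 **Argv)
{
    UINTN Index;
 
    // Argv[0] is the application name; the remaining entries are its arguments.
    for (Index = 0; Index < Argc; Index++) {
        // In the EDK2 print library, %d expects a 32-bit integer and %s a CHAR16 string.
        Print(L"Argv[%d] = %s\n", (UINT32)Index, Argv[Index]);
    }
    return EFI_SUCCESS;
}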

Criticism

Numerous digital rights activists have protested against UEFI. Ronald G. Minnich, a co-author of coreboot, and Cory Doctorow, a digital rights activist, have criticized EFI as an attempt to remove the user's ability to truly control the computer.[86][87] UEFI does not solve any of the BIOS's long-standing problems of requiring two different drivers (one for the firmware and one for the operating system) for most hardware.[88]

The open-source TianoCore project also provides the UEFI interfaces.[89] TianoCore lacks the specialized drivers that initialize chipset functions, which are instead provided by coreboot, of which TianoCore is one of many payload options. The development of coreboot requires cooperation from chipset manufacturers to provide the specifications needed to develop initialization drivers.

Secure boot

See also: Windows 8 § Reception and Hardware restrictions § Secure boot

In 2011, Microsoft announced that computers certified to run its Windows 8 operating system had to ship with secure boot enabled using a Microsoft key. Following the announcement, the company was accused by critics and free software/open source advocates (including the Free Software Foundation) of trying to use the secure boot functionality of UEFI to hinder or outright prevent the installation of alternative operating systems such as Linux. Microsoft denied that the secure boot requirement was intended to serve as a form of lock-in, and clarified its requirements by stating that Intel-based systems certified for Windows 8 must allow secure boot to enter custom mode or be disabled, but that this does not apply to systems using the ARM architecture.[41][90]

Other developers raised concerns about the legal and practical issues of implementing support for secure boot on Linux systems in general. Former Red Hat developer Matthew Garrett noted that conditions in the GNU General Public License version 3 may prevent the use of the GRUB bootloader without a distribution's developer disclosing the private key (however, the Free Software Foundation has since clarified its position, assuring that the responsibility to make keys available was held by the hardware manufacturer),[67] and that it would also be difficult for advanced users to build custom kernels that could function with secure boot enabled without self-signing them.[90] Other developers suggested that signed builds of Linux with another key could be provided, but noted that it would be difficult to persuade OEMs to ship their computers with the required key alongside the Microsoft key.[3]

Several major Linux distributions have developed different implementations for secure boot. Matthew Garrett himself developed a minimal bootloader known as shim, a pre-compiled, signed bootloader that allows the user to individually trust keys provided by distributors.[91] Ubuntu 12.10 uses an older version of shim pre-configured for use with Canonical's own key, which verifies only the bootloader and allows unsigned kernels to be loaded; its developers considered signing only the bootloader more feasible, since a trusted kernel secures only the user space and not the pre-boot state that secure boot is designed to protect. That approach also allows users to build their own kernels and use custom kernel modules without needing to reconfigure the system.[67][92][93] Canonical maintains its own private key to sign installations of Ubuntu pre-loaded on certified OEM computers that run the operating system, and plans to enforce a secure boot requirement, with both a Canonical key and a Microsoft key (for compatibility reasons) included in their firmware. Fedora also uses shim, but requires that both the kernel and its modules be signed as well.[92]

It has been disputed whether the kernel and its modules must be signed as well; while the UEFI specifications do not require it, Microsoft has asserted that its contractual requirements do, and that it reserves the right to revoke any certificates used to sign code that could be used to compromise the security of the system.[93] In February 2013, another Red Hat developer attempted to submit a patch to the Linux kernel that would allow it to parse Microsoft's Authenticode signing using a master X.509 key embedded in PE files signed by Microsoft. However, the proposal was criticized by Linux creator Linus Torvalds, who attacked Red Hat for supporting Microsoft's control over the secure boot infrastructure.[94]

On March 26, 2013, the Spanish free software development group Hispalinux filed a formal complaint with the European Commission, contending that Microsoft's secure boot requirements on OEM systems were "obstructive" and anti-competitive.[95]

At the Black Hat conference in August 2013, a group of security researchers presented a series of exploits in specific vendor implementations of UEFI that could be used to exploit secure boot.[96]

Windows 10 will allow OEMs to decide whether or not the ability to configure or disable secure boot is offered to users of their x86 systems.[97]

Firmware issues

The increased prominence of UEFI firmware in devices has also led to a number of technical issues blamed on specific vendors' implementations of it.[98]

Following the release of Windows 8 in late 2012, it was discovered that certain Lenovo computer models with secure boot had firmware hardcoded to allow only executables named "Windows Boot Manager" or "Red Hat Enterprise Linux" to load, regardless of any other setting.[99] Other issues were encountered on several Toshiba laptop models with secure boot, which were missing certain certificates required for secure boot to operate properly.[98]

In January 2013, a bug in the UEFI implementation on some Samsung laptops was publicized: installing a Linux distribution in UEFI mode caused the laptops to be bricked. Potential conflicts with a kernel module designed to access system features on Samsung laptops were initially blamed (prompting kernel maintainers to disable the module on UEFI systems as a safety measure), but Matthew Garrett discovered that the bug was actually triggered by storing too many UEFI variables to memory, and that it could also be triggered under Windows under special conditions. He ultimately determined that the offending kernel module had caused kernel message dumps to be written to the firmware's variable store, thus triggering the bug.[33][100][101]
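
To illustrate what "storing UEFI variables" involves, the following minimal sketch (an illustration only, using a made-up variable name and vendor GUID, and assuming the EADK environment from the Applications development section) shows how a UEFI application writes a non-volatile variable through the runtime services table. Each such write consumes space in the firmware's variable store, the resource implicated in the Samsung bug.

#include <Uefi.h>
#include <Library/UefiLib.h>
#include <Library/ShellCEntryLib.h>
#include <Library/UefiRuntimeServicesTableLib.h>
 
// Hypothetical vendor GUID, used only for this illustration.
STATIC EFI_GUID mExampleGuid =
  { 0x12345678, 0x1234, 0x1234, { 0x12, 0x34, 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc } };
 
EFI_STATUS EFIAPI ShellAppMain(IN UINTN Argc, IN CHAR16 **Argv)
{
    CHAR8      Data[] = "example payload";
    EFI_STATUS Status;
 
    // Writing a non-volatile variable consumes space in the firmware's variable
    // store; on the affected Samsung laptops, filling too much of that store
    // (for example with kernel message dumps) triggered the bug.
    Status = gRT->SetVariable(L"ExampleVar",
                              &mExampleGuid,
                              EFI_VARIABLE_NON_VOLATILE |
                                EFI_VARIABLE_BOOTSERVICE_ACCESS |
                                EFI_VARIABLE_RUNTIME_ACCESS,
                              sizeof (Data),
                              Data);
    Print(L"SetVariable returned %r\n", Status);
    return EFI_SUCCESS;
}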