OCP
What is it?
The Open Compute Project (OCP) is a non-profit organization launched in 2011 by founding members Facebook, Intel, Rackspace and Goldman Sachs. It was established with the goal of creating open-source data center hardware, promoting the sharing of product designs and best practices amongst its member companies and with the general public. As stated on its official website, the OCP aims to “…reimagine hardware, making it more efficient, flexible and scalable…with a global community of technology leaders working together to break open the black box of proprietary IT infrastructure to achieve greater choice, customization, and cost savings”.
The OCP holds regular online meetings and discussions amongst its members to share and discuss the design and development of new data center hardware (including servers, storage, switches and data center racks), as well as annual summits around the world where members will gather to share their latest research with the OCP community.
Why do you need it?
The benefit of an open-source hardware community is similar to that of open-source software – it fosters a large group of contributors who work together to develop new, more efficient designs and to improve existing ones. Members of the OCP community come from different companies and backgrounds, and can therefore contribute their own unique expertise and experience to collectively improve a product. This shared method of development also helps ensure interoperability between products from different vendors, giving customers the flexibility and freedom to mix and match products and solutions from different companies to best meet their computing infrastructure needs.
The main audience for OCP server and rack designs has traditionally been hyperscalers – companies such as Facebook, Google, LinkedIn and Yahoo that deploy huge numbers of servers. OCP server and rack infrastructure has therefore been designed to increase efficiency and decrease costs at scale – for example, by increasing rack space utilization (a 21” OCP rack is wider than a standard 19” server rack, allowing more server hardware to be deployed within a single rack) and by increasing power supply efficiency (OCP servers are designed to draw from a centralized, shared power supply, reducing the number of power supplies needed and lowering maintenance and management costs). Facebook has claimed that its initial deployment of OCP server and rack infrastructure was 38% more energy efficient to build and 24% less expensive to run than the company’s previous facilities.
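To make the scale argument concrete, here is a rough back-of-the-envelope sketch in Python. Only the 19” and 21” bay widths come from the paragraph above; the per-rack server and power supply counts are assumptions chosen purely for illustration, not OCP specification values.

```python
# Back-of-the-envelope comparison of equipment-bay width and PSU count.
# All figures other than the 19" / 21" widths quoted above are assumptions
# chosen only to illustrate the reasoning, not OCP specification values.

EIA_BAY_WIDTH_IN = 19.0   # standard EIA equipment bay width
OCP_BAY_WIDTH_IN = 21.0   # OCP Open Rack equipment bay width

extra_width = (OCP_BAY_WIDTH_IN - EIA_BAY_WIDTH_IN) / EIA_BAY_WIDTH_IN
print(f"Extra horizontal equipment space per rack: {extra_width:.1%}")

# Centralized power: one shared power shelf per rack instead of
# one or two PSUs inside every server (assumed counts for illustration).
servers_per_rack = 30
psus_per_server_traditional = 2          # assumed redundant PSUs per node
psu_modules_per_shared_power_shelf = 6   # assumed shared shelf configuration

traditional_psus = servers_per_rack * psus_per_server_traditional
shared_psus = psu_modules_per_shared_power_shelf
print(f"PSUs to purchase and manage per rack: {traditional_psus} vs. {shared_psus}")
```

Under these assumed counts, the shared power shelf cuts the number of power supply units to buy, monitor and service per rack by an order of magnitude, which is where the claimed cost savings at scale come from.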
However, the OCP is also working to develop shared, open-source specifications and designs that are appropriate even for companies deploying only a limited number of servers. For example, the OCP mezzanine slot, an expansion card and slot specification for add-on networking and storage cards, has been broadly adopted by add-on card vendors (such as Mellanox and Broadcom) and server makers (such as GIGABYTE) alike.
How is GIGABYTE helpful?
GIGABYTE is an active member of the OCP, regularly attending the OCP’s annual summits and continuously designing and releasing new compute, storage and GPU server hardware based on the OCP’s Open Rack Standard specifications, known as our Data Center - OCP family of products.
GIGABYTE has also incorporated the OCP’s mezzanine expansion slot design into a large number of our standard Rack, High Density, GPU and Storage servers. An OCP mezzanine slot enables the installation of a compatible PCIe Gen 3.0 or Gen 4.0 networking or storage card with minimal use of space within the server while maximizing heat dissipation (and, in the case of OCP 3.0, also allowing hot-swap installation and removal). This frees more of the server’s standard PCIe expansion slots and internal space for the deployment of a maximum number of GPU or FPGA cards, which require more physical space and airflow.
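The practical effect is easiest to see as a slot-allocation exercise. The short Python sketch below is purely illustrative – the slot counts, device names and the plan_slots helper are hypothetical, not GIGABYTE product data – but it shows how routing the network card to a mezzanine bay leaves every standard PCIe slot free for accelerators.

```python
# Minimal sketch of the slot-planning idea described above: moving the NIC
# onto an OCP mezzanine slot frees standard PCIe slots for accelerators.
# Slot counts and device names are hypothetical examples, not the layout of
# any specific GIGABYTE server.

STANDARD_PCIE_SLOTS = 8                        # assumed full-height PCIe slots
devices = ["network adapter"] + ["GPU"] * 8    # everything we want to install


def plan_slots(devices, standard_slots, has_mezzanine):
    """Assign each device to a slot: NIC -> mezzanine if present, rest -> PCIe."""
    placement = []
    free_standard = standard_slots
    for dev in devices:
        if dev == "network adapter" and has_mezzanine:
            placement.append((dev, "OCP mezzanine slot"))
        elif free_standard > 0:
            placement.append((dev, "standard PCIe slot"))
            free_standard -= 1
        else:
            placement.append((dev, "no slot available"))
    return placement


for has_mezz in (True, False):
    plan = plan_slots(devices, STANDARD_PCIE_SLOTS, has_mezz)
    fitted = sum(slot != "no slot available" for _, slot in plan)
    print(f"mezzanine={has_mezz}: {fitted} of {len(devices)} devices fitted")
```

In this assumed configuration, a chassis with a mezzanine bay fits the network adapter plus all eight GPUs, while the same chassis without one must give up a standard slot – and therefore one accelerator – to the network card.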