
Enterprise Hyperscale Lab

The EHL consists of a dual-thermal-zone (cold/hot) rackmount cabinet with power conditioning and backup, power distribution, thermal monitoring, and 1- and 10-gigabit network services. This cabinet supports a large number of hyperscale and SOC-based ARM computers for various applied research projects.
 
A second equipment cabinet very similar to the first is on order and will be installed in Summer 2015.
== Equipment Detail ==
=== Cabinet ===
Each EHL cabinet is a [http://www.silentium.com/?page_id=33 Silentium AcoustiRACK Active], a full-height acoustically-insulated rackmount cabinet with two fan units. The lower fan unit takes air from outside the cabinet and blows it up the front of the rackmount equipment area (cold zone). Air passes through the individual devices and is vented out the back of each unit into the hot zone. A second fan unit exhausts air from the hot zone out the top of the cabinet.
Each of the fan units includes active noise cancellation so that the loaded rack can be operated in a software development lab context.
Devices which are not configured with dual PSUs are connected to just one of the PDUs.
The total draw of the EHL equipment installed in the first cabinet as of January 2015 was approximately 1.8 kW under load. The current power system can support a little over 3 kW; the cabinet supports thermal exchange of about 8 kW.
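As a rough worked example of that power budget (the capacity figures are from above; the per-node draw is a hypothetical illustration, not a measured EHL value), in Python:
<pre>
# Rough power-budget arithmetic for cabinet 1.
measured_load_kw    = 1.8  # measured draw under load, January 2015
power_capacity_kw   = 3.0  # approximate limit of the power system
thermal_capacity_kw = 8.0  # approximate thermal exchange capacity

headroom_kw = power_capacity_kw - measured_load_kw
print(f"Electrical headroom: {headroom_kw:.1f} kW")  # ~1.2 kW

# If a typical SoC node drew ~15 W (a hypothetical figure), the
# remaining electrical budget would admit roughly this many nodes:
print(int(headroom_kw / 0.015))  # ~80
</pre>
Note that with these figures the electrical capacity, not the thermal capacity, is the binding limit.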
=== Environmental Monitoring ===
=== Networking ===
One Cisco 24-port gigabit switch and one Netgear 24-port 10-gigabit switch are installed in the back of each EHL cabinet. The 10-gigabit switch provides both 10GBASE-T and SFP+ connections. Where possible, SFP+/DA (Direct Attach) copper cables are used because they are simpler and less expensive than fibre optic cables, yet offer much lower latency than 10GBASE-T connections (on the order of 0.3 µs versus 2-3 µs per link, since the 10GBASE-T PHY's block encoding adds delay); where DA cabling is not possible, fibre optic transceivers or 10GBASE-T copper connections are used. Devices which do not support an SFP+ 10-gigabit connection are connected with 10GBASE-T, 1-gigabit, or 100 Mbit ethernet. The connection between the EHL cabinets is made by a 10-gigabit fibre optic link.
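A one-line comparison of those per-link figures (order-of-magnitude estimates, not measurements taken in the EHL):
<pre>
# Approximate per-link latencies (order-of-magnitude figures only).
sfp_da_us    = 0.3  # SFP+/DA copper
tenbase_t_us = 2.5  # 10GBASE-T (PHY block encoding adds delay)
print(f"10GBASE-T: ~{tenbase_t_us / sfp_da_us:.0f}x the latency of SFP+/DA")
</pre>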
Connections between the EHL LAN and the outside world are provided by a [http://www.compulab.co.il/utilite-computer/web/utilite-overview Utilite], a small ARM computer installed in cabinet 1. This computer acts as a [http://www.diablotin.com/librairie/networking/firewall/ch04_02.htm dual-homed host] that provides firewall, NAT, forwarding, DNS, and VPN endpoint services.
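As a sketch of what the forwarding and NAT parts of such a dual-homed gateway involve (the interface names and the use of iptables are assumptions for illustration, not the Utilite's actual configuration):
<pre>
# Minimal sketch of forwarding/NAT setup on a dual-homed Linux
# gateway; run as root. "eth0"/"eth1" are hypothetical names.
import subprocess

WAN, LAN = "eth0", "eth1"  # outside-facing and EHL-facing interfaces

def sh(*args):
    subprocess.run(args, check=True)

# Allow packets to be forwarded between the two interfaces.
sh("sysctl", "-w", "net.ipv4.ip_forward=1")

# NAT: masquerade EHL traffic leaving via the outside interface.
sh("iptables", "-t", "nat", "-A", "POSTROUTING", "-o", WAN,
   "-j", "MASQUERADE")

# Firewall: allow replies in, drop unsolicited inbound connections.
sh("iptables", "-A", "FORWARD", "-i", WAN, "-o", LAN,
   "-m", "state", "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT")
sh("iptables", "-A", "FORWARD", "-i", WAN, "-o", LAN, "-j", "DROP")
</pre>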
=== Storage ===
Storage is provided by a Synology RackStation in cabinet 1, which provides both storage area network (SAN, raw block devices over protocols such as iSCSI) and network-attached storage (NAS, filesystem-level shared storage over protocols such as NFS and SMB) services. It is populated with twelve 1 TB SSDs and equipped with dual power supplies and dual 10-gigabit ethernet.
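The practical difference is visible from a client: a NAS share presents files, while a SAN LUN presents raw blocks. A small illustration (the paths and device names are hypothetical):
<pre>
# NAS: the RackStation exports a filesystem (NFS/SMB); clients see
# ordinary files on a mounted share.
with open("/mnt/ehl-nas/project/notes.txt") as f:   # hypothetical mount
    print(f.read())

# SAN: an iSCSI LUN appears as a raw block device; the client must
# partition and format it itself before it can hold files.
with open("/dev/sdb", "rb") as dev:                 # hypothetical device
    header = dev.read(4096)  # raw bytes, no filesystem semantics
</pre>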
=== Terminal Server ===
Many of the computers installed in the EHL do not have video output (because they're not intended for desktop applications). Most of these have a serial port; in many cases, this is a virtual serial port which is accessed over the network using the IPMI SOL (Serial-over-LAN) protocol. Since this management traffic does not normally leave the local network, the client must run somewhere on the LAN.
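For example, a SOL console is typically opened with ipmitool from a machine on the EHL LAN (the address and credentials below are hypothetical):
<pre>
# Attach to a node's virtual serial port over IPMI Serial-over-LAN.
import subprocess

subprocess.run([
    "ipmitool", "-I", "lanplus",    # RMCP+ (IPMI v2.0) transport
    "-H", "10.0.0.42",              # BMC address (hypothetical)
    "-U", "admin", "-P", "secret",  # BMC credentials (hypothetical)
    "sol", "activate",              # open the SOL session
])
</pre>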
For systems that do not have a working IPMI implementation, each EHL cabinet is equipped with a [[Cyclades Terminal Server]] which provides remote access to 32 serial ports. A remote user can connect to a selected port to monitor and control the connected system.
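Console servers of this kind conventionally expose serial port N on TCP port 7000+N; assuming the Cyclades is configured that way (verify against its actual settings; the hostname here is hypothetical), console output can be watched with a plain socket:
<pre>
# Watch the console attached to serial port 12 of the terminal server.
import socket

PORT_NUMBER = 12
with socket.create_connection(("ts1.ehl.example", 7000 + PORT_NUMBER)) as s:
    while True:
        data = s.recv(4096)
        if not data:
            break
        print(data.decode(errors="replace"), end="")
</pre>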
=== Display ===
Cabinet 1 of the EHL is equipped with a 15" 4:3 LCD monitor, bolted to a 4U blanking panel. This display is driven by a Raspberry Pi, and can be used to show educational information about the rack, current system status information, or diagnostic data.
=== Calxeda/Boston Viridis ARM System ===
32-bit ARM compute is provided by a Calxeda EnergyCore ECX-1000 system from Boston Limited. There are four installed "Energy Cards", each with four ECX-1000 nodes; each node has a quad-core ARM Cortex-A9 processor and a small (Cortex-M) ARM management processor. This system runs the [[Pidora]] build system (except for the Koji hub and web nodes).
=== 64-Bit ARM Compute ===
