Home
The Compact Cori Project started as a way to visually demonstrate (1) what parallel computing is and (2) how supercomputers benefit from parallel computing, so that students, visitors, and guests on a tour of NERSC (the National Energy Research Scientific Computing Center) can better understand the current state of supercomputers.
This project consists of two main objectives: (1) set up Compact Cori, a cluster of 16 small computers, and (2) create a particle simulation program that runs continuously on Compact Cori.
Compact Cori is a cluster of 16 nodes (or computers) designed to incorporate elements of Cori, NERSC’s next supercomputer (NERSC-8). Each node is an Intel NUC (Next Unit of Computing) with 16 GB of DDR3 (double data rate type 3) memory and a 256 GB SSD (solid state drive). Without its case, each node is about 4 inches by 4 inches, roughly the size of an adult’s palm.
The following table shows the specific parts that were bought in order to create Compact Cori:
| Quantity | Description | Link | Notes |
|---|---|---|---|
| 16 | Intel NUC | http://amzn.com/B00SD9IS1S | |
| 16 | 16 GB DDR3 Memory | http://amzn.com/B007B5S52C | At least 8 GB of memory per NUC is recommended. |
| 16 | 256 GB 2.5” SSD | http://amzn.com/B00KFAGCWK | Alternatively, one can purchase an M.2 SSD. |
| 15 | USB Status Lights | https://blink1.thingm.com/buy/ | Debian setup: lights are attached to nodes #1 through #15. |
| 16 | USB Status Lights | https://blink1.thingm.com/buy/ | Ubuntu setup: lights are attached to all 16 nodes. |
| 1 | 16 port switch | http://amzn.com/B000063UZW | |
| 1 | USB to Ethernet Adapter | http://amzn.com/B00ET4KHJ2 | This allows the master node to connect to the outside world. |
The patch cables for Compact Cori, however, were made from an existing spool of cable. Compact Cori is placed in NERSC’s new Computational Research and Theory (CRT) building, just outside the room that houses Cori. It is housed in a custom-fabricated acrylic display frame created by TAP Plastics Incorporated. Only node #16 is connected to a 70” display monitor.
Although every node of Compact Cori has the same hardware, the nodes have not always run the same operating system. The initial plan was to install Debian on all of the nodes. However, due to graphics and display issues with Debian, node #16 was switched to Ubuntu. In that original configuration (the Debian setup), nodes #1 through #15 ran Debian, whilst node #16 ran Ubuntu.
In the current configuration (the Ubuntu setup), every node of Compact Cori runs Ubuntu 16.04 LTS.
This particle simulation program is essentially a simple molecular dynamics simulation, separated into a front end and a back end. The front end is implemented with PlayCanvas (a 3D game engine), whilst the back end is implemented with Python and mpi4py (MPI bindings for Python). Data passes between the front end and the back end via HTTP GET and POST requests.
The simulation contains a partially transparent box (or cube) with many particles bouncing inside it. The box is divided into N horizontally stacked partitions of equal size (much like sliced bread), where N is at least 1. For the Debian setup, N can be at most 14; for the Ubuntu setup, N can be at most 15. Each particle is a sphere with a radius of 30 units, though it is rendered to look like a giant water molecule.
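The equal-slice layout above can be sketched as a small lookup function. This is an illustrative sketch, not the project's code: the box extent (`BOX_WIDTH`) and the choice of the x axis as the sliced axis are assumptions, since the wiki only states that the box is split into N equal horizontal partitions.

```python
BOX_WIDTH = 1000  # assumed extent of the box along the sliced axis


def partition_of(x, n_partitions, box_width=BOX_WIDTH):
    """Return the index (0..n_partitions-1) of the slice containing x."""
    width = box_width / n_partitions
    index = int(x // width)
    # Clamp so a particle sitting exactly on the far wall stays in bounds.
    return min(max(index, 0), n_partitions - 1)
```

With four partitions of a 1000-unit box, each slice is 250 units wide, so a particle at x = 500 falls into slice 2.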
For the Debian setup, running this simulation program in parallel requires the workload to be separated into three types of jobs, called master node (for node #1), slave nodes (for node #2 through node #15), and visual node (for node #16). The master node and all the slave nodes handle the back end, whilst only the visual node handles the front end.
For the Ubuntu setup, running this simulation program in parallel requires the workload to be separated into two types of jobs, called master node (for node #1) and slave nodes (for node #2 through node #16). All nodes will handle the back end, but the master node will handle the front end too.
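The role assignment described in the two paragraphs above could be expressed as a function of each process's MPI rank. This is a hypothetical sketch: with mpi4py the rank would come from `MPI.COMM_WORLD.Get_rank()`, but here it is passed in so the logic can run without an MPI launcher, and the rank-to-node mapping (rank 0 = node #1) is an assumption.

```python
def role_for_rank(rank, setup="ubuntu"):
    """Map a 0-based rank to a job type, following the wiki's node numbering."""
    if rank == 0:
        # Node #1: master (and, in the Ubuntu setup, also the front end).
        return "master"
    if setup == "debian" and rank == 15:
        # Node #16 in the Debian setup runs only the front end.
        return "visual"
    return "slave"
```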
As the master of this simulation, node #1 is tasked with serving all GET and POST requests, which come only from the node running the front end. It keeps the most up-to-date record of every particle’s position and velocity. It also redistributes the simulation’s workload whenever the number of partitions (or active slave nodes) changes. At each time step, node #1 does all of the above and flashes white on its USB status light.
In the Ubuntu setup, the master node is also tasked with displaying the simulation: it continuously stays updated on the particles’ positions and velocities, and it registers a change whenever the number of particles or the number of slave nodes is changed. It runs the front end of the simulation through PlayCanvas.
As the slave nodes of this simulation, nodes #2 through #15 (or #16 in the Ubuntu setup) are each tasked with simulating one partition of the box. At each time step, each slave node calculates the position and velocity of the particles that are in its partition. Each slave node also flashes its color on its USB status light, and the particles in its partition take on that same color.
A particle can also travel into another partition (and thus onto another node) when it moves outside its current partition. When that happens, the slave node sends the particle to the node owning the partition it is entering, using an mpi4py call. Lastly, the slave nodes update the master node on the particles’ positions and velocities.
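The handoff step above might look like the following sketch. Only the destination-rank logic is shown as runnable code, so it works without MPI; the actual transfer in the simulation uses an mpi4py call, indicated in the trailing comments. The rank layout (slave ranks 1..N own slices 0..N-1) and box width are assumptions.

```python
def migration_target(x, my_rank, n_partitions, box_width=1000.0):
    """Return the rank that should own a particle at position x,
    or None if it is still inside this node's slice."""
    width = box_width / n_partitions
    owner = min(max(int(x // width), 0), n_partitions - 1) + 1
    return None if owner == my_rank else owner
    # With mpi4py, the handoff itself would look roughly like:
    #   comm.send(particle, dest=owner)               # on the sender
    #   particle = comm.recv(source=MPI.ANY_SOURCE)   # on the receiver
```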
As the visual node of this simulation (in the Debian setup), node #16 is tasked with displaying the simulation: it continuously sends GET requests to the master node to stay updated on the particles’ positions and velocities, and it sends POST requests to the master node whenever the number of particles or the number of slave nodes is changed. It runs the front end of the simulation through PlayCanvas.
The physics algorithm for this simulation is very simple: if the centers of two particles are within five times the diameter of a rendered water molecule (that is, within 300 units), then those two particles are considered to have collided, and their velocity vectors change.
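A minimal sketch of that collision rule follows. The distance check matches the 300-unit threshold above; the velocity response shown here is a simple velocity swap (as for equal-mass elastic spheres), which is an assumption, since the wiki only says the velocity vectors change.

```python
import math

COLLISION_DISTANCE = 300.0  # five times a rendered water molecule's diameter


def maybe_collide(p1, p2):
    """p1 and p2 are dicts with 'pos' and 'vel' as (x, y, z) tuples.
    Swap velocities if the particles are colliding; return whether they did."""
    if math.dist(p1["pos"], p2["pos"]) < COLLISION_DISTANCE:
        p1["vel"], p2["vel"] = p2["vel"], p1["vel"]
        return True
    return False
```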
Let N be the number of partitions in the simulation, and let P be the number of particles in the simulation:
- KEY_1 – set N to 1
- KEY_2 – set N to 2
- KEY_3 – set N to 3
- KEY_4 – set N to 4
- KEY_5 – set N to 5
- KEY_6 – set N to 6
- KEY_7 – set N to 7
- KEY_8 – set N to 8
- KEY_9 – set N to 9
- KEY_0 – set N to 10
- KEY_Q – set N to 11
- KEY_W – set N to 12
- KEY_E – set N to 13
- KEY_R – set N to 14
- KEY_T – set N to 15 (Ubuntu setup only)
- KEY_LEFT – decrease N by 1
- KEY_RIGHT – increase N by 1
- KEY_UP – increase P by 1
- KEY_DOWN – decrease P by 1
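The bindings above can be collected into a lookup table. This is an illustrative sketch, not the project's handler: the key names mirror PlayCanvas's `pc.KEY_*` constants but are plain strings here so the table runs without the engine, the clamping ranges are assumptions, and the fifteenth-partition key is taken to be KEY_T (continuing the Q-W-E-R row).

```python
SET_N_KEYS = {
    "KEY_1": 1, "KEY_2": 2, "KEY_3": 3, "KEY_4": 4, "KEY_5": 5,
    "KEY_6": 6, "KEY_7": 7, "KEY_8": 8, "KEY_9": 9, "KEY_0": 10,
    "KEY_Q": 11, "KEY_W": 12, "KEY_E": 13, "KEY_R": 14,
    "KEY_T": 15,  # Ubuntu setup only (assumed key)
}


def handle_key(key, n, p, n_max=15):
    """Return the new (n, p) after a key press; n_max is 14 for Debian."""
    if key in SET_N_KEYS:
        n = min(SET_N_KEYS[key], n_max)
    elif key == "KEY_LEFT":
        n = max(n - 1, 1)
    elif key == "KEY_RIGHT":
        n = min(n + 1, n_max)
    elif key == "KEY_UP":
        p += 1
    elif key == "KEY_DOWN":
        p = max(p - 1, 0)
    return n, p
```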
A GET request to this simulation returns JSON data containing a list of parameters (the number of particles, the number of partitions, the dimensions of the simulation box, the number of time steps per second, and the total energy contained in the box) and a list of particles, whilst a POST request sends JSON data containing the new number of particles and the new number of partitions to the server.
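The two payloads might look like the following sketch. The field contents follow the lists above, but every key name and value here is an assumption for illustration; the wiki does not specify the exact JSON schema.

```python
import json

# Hypothetical shape of a GET response: simulation parameters plus particles.
get_response = {
    "parameters": {
        "num_particles": 64,
        "num_partitions": 4,
        "box_dimensions": [1000, 1000, 1000],
        "timesteps_per_second": 30,
        "total_energy": 1.5e6,
    },
    "particles": [
        {"pos": [10.0, 20.0, 30.0], "vel": [1.0, 0.0, -1.0]},
    ],
}

# Hypothetical shape of a POST body: just the two updated counts.
post_body = {"num_particles": 128, "num_partitions": 8}

# Both travel over HTTP as JSON text.
payload = json.dumps(post_body)
```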