\section{Performance Evaluation}
\label{sec:eval}
%The complementary evaluation methodology is essential to assess both the model-level and hardware-level performances.
%For a network model, its topology, neuron and synapse models, and training methods are major descriptions for any kind of neural networks, including SNNs.
%While the recognition accuracy, network latency and also the biological-time taken for both training and testing are specific performance measurements of a spike-based model.
%To build any SNN model on a hardware platform, its network size will be constrained by the scalability of the hardware;
%the neural and synaptic models can only be selected from the supported ones if the they are not programmable on the hardware;
%the accuracy of the result (e.g. recognition rate) will be affected by the precision of the membrane potential and synaptic weights;
%and an on-line learning algorithm cannot be implemented on a hardware if synaptic plasticity is not supported.
%Running an identical SNN model on different neuromorphic hardware platforms can benchmark their performances on simulation time and energy use.
A complementary evaluation methodology is essential to provide common metrics and assess both the model-level and hardware-level performance.
%Regarding the hardware-independent evaluation, specific measurements on SNNs take into consideration in addition to assessment of conventional neural networks.
%Additionally, the performance of the network, such as CA, simulation time and energy consumption may be influenced by the chosen hardware platform.
%Thus, when benchmarking different neuromorphic hardware, simulation time and energy consumption are the main concerns.
%A crucial part of research is reporting results and comparing achievements with other state-of-the-art work. Unfortunately there is no standard way of fulfilling those tasks, which, sometimes, leads to confusion for the reader. We feel that the neuroscience community would benefit from a common ground for NN characteristics, our thoughts on this matter are reflected in the first part of this section.
%As neuromorphic hardware is more commonly used in research, larger and more complex NN models could be used. Additionally, performance of the network may be influenced by the chosen development platform. We would like to assist with the following considerations when facing these impending changes.
\begin{table*}[hbt!]
\caption{Hardware independent comparison}
\begin{center}
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{ l c c c c }
$ $ &
\begin{mycell}{1.9cm}Preprocessing\end{mycell} &
\begin{mycell}{3.5cm} Network\end{mycell} &
\begin{mycell}{3.5cm} Training \end{mycell} &
\begin{mycell}{3.5cm} Recognition \end{mycell} \\
% \cline{3-5}
% &
% &
% \begin{mycell}{3.5cm}Topology, neuron and synapse models, \\extra classifier \end{mycell} &
% \begin{mycell}{3.5cm}Methodology, simulation time, sample repetition \end{mycell} &
% \begin{mycell}{3.5cm}Events/$s$, time/sample, response time, accuracy\end{mycell} \\
\hline
%
\begin{mycell}{2.5cm}~\cite{brader2007learning} \end{mycell} &
\begin{mycell}{1.9cm} None \end{mycell} & % preprocessing
\begin{mycell}{3.5cm} Two layer, LIF neurons\end{mycell}& % network
\begin{mycell}{3.5cm} Semi-supervised, STDP, calcium LTP/LTD\end{mycell}& % training
\begin{mycell}{3.5cm} 96.5\% \end{mycell} \\% recognition
%
\begin{mycell}{2.5cm}~\cite{beyeler2013categorization} \end{mycell} &
\begin{mycell}{1.9cm} None \end{mycell} & % preprocessing
\begin{mycell}{3.5cm} V1 (edge), \\V4 (orientation),\\ and competitive decision, Izhikevich neurons\end{mycell}& % network
\begin{mycell}{3.5cm} Semi-supervised, STDP, \\ calcium LTP/LTD \end{mycell} & % training
\begin{mycell}{3.5cm} 91.6\% \\ 300~ms per test \end{mycell} \\% recognition
%
\begin{mycell}{2.5cm}~\cite{neftci2013event} \end{mycell} &
\begin{mycell}{1.9cm} Thresholding\end{mycell} & % preprocessing
\begin{mycell}{3.5cm} Two layer RBM, \\ LIF neurons \end{mycell}& % network
\begin{mycell}{3.5cm} Event-driven contrastive divergence, supervised \end{mycell}& % training
\begin{mycell}{3.5cm} 91.9\% \\ 1~s per test\end{mycell} \\% recognition
%
\begin{mycell}{2.5cm}~\cite{diehl2015unsupervised} \end{mycell} &
\begin{mycell}{1.9cm} None \end{mycell} & % preprocessing
\begin{mycell}{3.5cm} Two layers, LIF neurons, inhibitory feedback \end{mycell}&
\begin{mycell}{3.5cm} Unsupervised, exp. STDP, %adaptive membrane potential,
$3,000,000$~s of training\\ $200,000$~s per iteration\end{mycell} &
\begin{mycell}{3.5cm} 95\% \end{mycell}\\
%
\begin{mycell}{2.5cm}~\cite{Diehl2015fast}\end{mycell} &
\begin{mycell}{1.9cm} None \end{mycell} & % preprocessing
\begin{mycell}{3.5cm} ConvNet or \\FCnet, LIF neurons \end{mycell}& % network
\begin{mycell}{3.5cm} Off-line trained with ReLU, weight normalization \end{mycell}& % training
\begin{mycell}{3.5cm} 99.1\% (ConvNet), \\ 98.6\% (FCnet);\\0.5~s per test\end{mycell}\\ % recognition
%
\begin{mycell}{2.5cm}~\cite{zhao2014feedforward}\end{mycell} &
\begin{mycell}{1.9cm} Thresholding\\ or DVS \end{mycell}& % preprocessing
\begin{mycell}{3.5cm} Simple (Gabor), \\Complex (MAX) \\and Tempotron \end{mycell}& % network
\begin{mycell}{3.5cm} Tempotron, supervised \end{mycell}& % training
\begin{mycell}{3.5cm} Thresholding \\ 91.3\%, 11~s per test \\ DVS \\ 88.1\%, 2~s per test\end{mycell}\\ % recognition
%
\begin{mycell}{2.5cm} % %\cite{Stromatias2015scalable} \\
This paper \end{mycell} &
\begin{mycell}{1.9cm} None \end{mycell} & % preprocessing
\begin{mycell}{3.5cm} Four layer RBM, \\ LIF neurons \end{mycell}& % network
\begin{mycell}{3.5cm} Off-line trained, unsupervised \end{mycell}& % training
\begin{mycell}{3.5cm} 94.94\%\\16~ms latency \end{mycell} \\% recognition
%
\begin{mycell}{2.5cm} This paper \end{mycell} &
\begin{mycell}{1.9cm} None \end{mycell}& % preprocessing
\begin{mycell}{3.5cm} FC decision layer, \\ LIF neurons \end{mycell}& % network
\begin{mycell}{3.5cm} K-means clusters,\\Supervised STDP\\$18,000$~s of training \end{mycell}& % training
\begin{mycell}{3.5cm} 92.98\%\\1~s per test\\10.70~ms latency\end{mycell}\\ % recognition
\end{tabular}
\egroup
\end{center}
\label{tb:software_comparison}
\end{table*}
\subsection{Hardware-Independent}
\label{subsec:model}
%As we are proposing spike based data-sets or a methodology to produce them, i
First of all, it is desirable for researchers to specify whether they apply any preprocessing, either to images or to spikes.
Filtering the raw input may ease the classification/recognition task, while adding noise may demand stronger robustness from the model.
%This is important, for example, if we want to use a modified set to test noisy inputs.
Secondly, as with the evaluation of conventional artificial neural networks, a description of the network characteristics is most welcome, since it is the basis of the overall performance.
Furthermore, sharing the designs may inspire fellow scientists to bring new points of view to the problem and generate a positive feedback loop where everybody wins.
The network description should include the topology, and the neural and synaptic models.
The network topology defines the number of neurons used for each layer, and the connections between layers and neurons.
Some researchers make use of extra non-neural classifiers, sometimes to aid the design, others to enhance the output of the network.
Any particulars on this subject are greatly appreciated.
It is essential to state the type of neural and synaptic model (e.g. current-based LIF neuron) exploited in the network and the parameters configuring them, because neural activities differ greatly between various configurations.
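To illustrate why the model type and its parameters must be reported, the following minimal sketch integrates a current-based LIF neuron with Euler steps. All parameter values here are illustrative placeholders, not those of any model in this paper; changing the membrane time constant or the threshold visibly alters the spike output, which is why these numbers must accompany any reported accuracy.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau_m=20.0, v_rest=-65.0,
                 v_reset=-65.0, v_thresh=-50.0, r_m=1.0):
    """Euler integration of a current-based LIF neuron.

    Placeholder parameters (times in ms, voltages in mV), chosen only
    to demonstrate the dynamics, not taken from the paper.
    """
    v = v_rest
    spike_times = []
    for step, i_t in enumerate(input_current):
        # dv/dt = (-(v - v_rest) + R * I) / tau_m
        v += dt * (-(v - v_rest) + r_m * i_t) / tau_m
        if v >= v_thresh:              # threshold crossing: emit a spike
            spike_times.append(step * dt)
            v = v_reset                # reset the membrane potential
    return spike_times

# constant supra-threshold current drives regular firing
spikes = simulate_lif(np.full(200, 20.0))
print(len(spikes))
```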
%We believe the \emph{network description} should include some of the following elements:
%%{
%% \setlength{\leftmargini}{0.3cm}
%% \begin{itemize}
% %\item[] \hspace*{-0.3cm}
% \textit{Topology}, % \newline
% which includes the number of neurons, how they were arranged and the interaction between them;
% % What are the elements of the network (layers, populations)?
% % How are the different elements of the network arranged? Does the arrangement have any physical or biological interpretation? What are the interactions between them? What was the intuition behind this particular topology?
% %\item[] \hspace*{-0.3cm}
% the \textit{neuron model}, %\newline
% such as the activation function and any particulars of the implementation;
% %Does the model have a I\&F behaviour? What particular model was used?
% %\item[] \hspace*{-0.3cm}
% the \textit{synaptic plasticity model} %\newline
% is key for learning in the network, so the report should include a description of its characteristics (e.g. short-term and/or long-term).
% %In terms of synapse, what was the used? Does the network support short-term and/or long-term plasticity?
% %\item[] \hspace*{-0.3cm}
% %\textit{Extra classifier}. %\newline
% Some researchers make use of \emph{extra non-neural classifiers}, sometimes to aid the design, others to enhance the output of the NN. Any particulars on this subject are greatly appreciated.
% %\item[]
%% \end{itemize}
%%}
Thirdly, the learning procedure determines the recognition capability of a network model.
A clear distinction has always been made between supervised, semi-supervised and unsupervised learning.
A detailed description of newly proposed spike-based learning rules will be a great contribution to the field, given the current lack of spatio-temporal learning algorithms.
Most publications reflect the use of adaptations of existing learning rules; details of the modifications are highly desired.
In conventional computer vision, the number of iterations of training images presented to the network plays an important role.
Similarly, the biological time of training determines the amount of information provided.
%An important distinction to make, we think, is the nature of the \emph{training} procedure. A clear distinction has always been made between supervised, semi-supervised and unsupervised learning. Other specifications are the simulation time each sample is presented to the network, whether repeating samples was necessary and if they were presented continuously or with some ``silence'' was introduced between samples. Most publications reflect the use of adaptations to learning rules, details on the modifications are highly desired. A description of any additional weight-altering procedures used in the simulation are always welcome.
%Also, details on the particulars of the applied learning rule (e.g. STDP, BCM), was it modified somehow?
Finally, in the testing phase, where performance evaluation takes place, specific measurements of SNN models are essential in addition to recognition accuracy.
These should include details of the way samples were presented: event rates and the biological time per testing sample.
The combination of these two factors determines how much information is presented to the network.
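The interplay of these two reporting items can be made concrete with a small helper (an illustrative function, not part of any cited work) that estimates the expected number of input spikes a network receives per sample:

```python
def spikes_per_sample(mean_rate_hz, duration_ms, n_input_neurons):
    """Expected number of input spikes presented per test sample:
    mean rate (Hz) x presentation time (s) x input population size.
    Illustrative helper only."""
    return mean_rate_hz * (duration_ms / 1000.0) * n_input_neurons

# e.g. a 28x28 input population firing at 50 Hz for 1 s of biological time
print(spikes_per_sample(50.0, 1000.0, 28 * 28))  # -> 39200.0
```

Halving either the rate or the presentation time halves the information budget, so reporting only one of the two leaves the comparison ambiguous.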
An important performance metric is the response time (latency) of an SNN model.
A faster model is more suitable for real-time recognition systems such as neuromorphic robotics.
A commonly reported characteristic is the accuracy of the network; adding remarks on how these scores are obtained could help to unify criteria and ease comparison.
Work on SNN-based classification of MNIST is listed in Table~\ref{tb:software_comparison} and evaluated against the proposed metrics.
%Most classification papers report a percentage of accuracy that gives the reader a measure of the correct classifications~\citep{dietterich1998approximate}. Some times it might be desirable, for a better understanding of the paper, that a distinction between ambiguous, outliers and incorrect classes is made~\citep{liu2002performance}. A very useful piece of information is clear citation of the base-line source, which is almost always there but lost in a sea of references.
%Should we report also incorrect or ambiguous? Could some ``correct'' be masking ambiguous? Were the ambiguous due to noise? Was the noise added on purpose?
%Traditionally, neural network training has been done using rate-based encoding. As new theories emerge, a
%One the biggest distinctions on learning procedures is whether they were done using some \emph{supervision} or not; making this distinction clear is vastly appreciated. On supervised learning, the label of the data influences to establish categories and connection weights. Unsupervised learning has fewer constraints when it comes to class creation but might be tougher to get right.
%A number of different classes are expected, this quantity might give an insight onto the network topology and dynamics. A description of the methods used to generate and populate the classes is very helpful for the reader. (e.g. Did we use a statistical measure? Was it a combination of NN with some other algorithms?)
\subsection{Hardware-Specific}
\label{subsec:hw}
\begin{table*}[thb!]
\caption{Hardware dependent comparison}
\begin{center}
\bgroup
\def\arraystretch{1.4}
\begin{tabular}{l c c c c c c}
$ $ &
\begin{mycell}{2.0cm} System \end{mycell} &
% \begin{minipage}{1.3cm}\centering Simulation type \end{minipage} &
\begin{mycell}{2.0cm} Neuron Model \end{mycell} &
\begin{mycell}{2.0cm}Synaptic\\Plasticity\end{mycell} &
% \begin{minipage}{1cm}\centering Axonal delays \end{minipage} &
% \begin{minipage}{1cm}\centering Synaptic model \end{minipage} &
\begin{mycell}{2.0cm} Precision \end{mycell} &
% \begin{minipage}{1.2cm}\centering Synaptic precision \end{minipage} &
% \begin{minipage}{1.2cm}\centering Energy per SE \end{minipage} &
% \begin{minipage}{1.4cm}\centering Synaptic ops per Watt \end{minipage} &
\begin{mycell}{2.0cm} Simulation\\Time \end{mycell} &
\begin{mycell}{2.0cm} Energy/Power \\Usage \end{mycell}
% \begin{minipage}{1.7cm}\centering Programming front-end \end{minipage}
\\
\hline
% contents!
\begin{mycell}{1.8cm} SpiNNaker \citep{stromatias2013power} \end{mycell} &
\begin{mycell}{2.0cm} Digital, \\Scalable \end{mycell} &
\begin{mycell}{2.1cm}Programmable\\Neuron/Synapse,\\Axonal delay \end{mycell}&
\begin{mycell}{2.1cm}Programmable\\learning rule\end{mycell}&
\begin{mycell}{2.0cm}11- to 14-bit synapses\end{mycell} &
\begin{mycell}{2.0cm} Real-time \\ Flexible time resolution \end{mycell} &
\begin{mycell}{2.5cm} 8~nJ/SE \\54.27 MSops/W \end{mycell} \\
%
\begin{mycell}{1.8cm} TrueNorth \citep{merolla2014million}\end{mycell} & \begin{mycell}{2.0cm}Digital, \\Scalable \end{mycell}&
\begin{mycell}{2.0cm}Fixed models,\\Config params,\\Axonal delay\end{mycell}&
\begin{mycell}{2.0cm}No plasticity\end{mycell}&
\begin{mycell}{2.2cm}122 bits \\params \& states,
% per neuron
\\ 4-bit synapse
\\(4 signed int + on/off state)
\end{mycell}&
\begin{mycell}{2.0cm}Real-time\end{mycell}&
\begin{mycell}{2.0cm}46 GSops/W\end{mycell} \\
%
\begin{mycell}{1.8cm} Neurogrid \citep{benjamin2014neurogrid}\end{mycell} &
\begin{mycell}{2.0cm}Mixed-mode,\\Scalable\end{mycell} &
\begin{mycell}{2.0cm}Fixed models,\\Config params\end{mycell} &
\begin{mycell}{2.0cm}Fixed rule\end{mycell} &
\begin{mycell}{2.0cm}13-bit shared \\ synapses\end{mycell} &
\begin{mycell}{2.0cm}Real-time\end{mycell} &
\begin{mycell}{2.0cm}941 pJ/SE\end{mycell} \\
%
\begin{mycell}{1.8cm} HI-CANN \citep{schemmel2010wafer} \end{mycell} & \begin{mycell}{2.0cm}Mixed-mode,\\Scalable\end{mycell} &
\begin{mycell}{2.0cm}Fixed models,\\Config params\end{mycell}&
\begin{mycell}{2.0cm}Fixed rule\end{mycell}&
\begin{mycell}{2.0cm}4-bit synapses\end{mycell}&
\begin{mycell}{2.0cm}Faster than\\ real-time\end{mycell}&
\begin{mycell}{2.0cm}198 pJ/SE \\ 13.5 MSops/W \\(network only) \end{mycell}\\
%
\begin{mycell}{1.8cm} iAER-IFAT \citep{yu201265k}\end{mycell} &
\begin{mycell}{2.0cm}Mixed-mode,\\Scalable\end{mycell} &
\begin{mycell}{2.0cm}Fixed models,\\Config params\end{mycell}&
\begin{mycell}{2.0cm}No plasticity\end{mycell} &
\begin{mycell}{2.0cm}Analogue neuron/synapse\end{mycell} &
\begin{mycell}{2.0cm}Real-time\end{mycell}&
\begin{mycell}{2.0cm}20 GSops/W\end{mycell}
%dummy update text
\end{tabular}
\egroup
\end{center}
\label{tb:hardware_comparison}
\end{table*}
%
Depending on how neurons, synapses and spike transmission are implemented, neuromorphic systems can be categorised as analogue, digital, or mixed-mode analogue/digital VLSI circuits. Some analogue implementations exploit sub-threshold transistor dynamics to emulate neurons and synapses directly on hardware~\citep{indiveri2011neuromorphic}, and are more energy-efficient while requiring less area than their digital counterparts~\citep{joubert2012hardware}. However, the behaviour of analogue circuits is largely determined during the fabrication process due to transistor mismatch~\citep{indiveri2011neuromorphic,pedram2006thermal,linares2003compact}, while their wiring densities render them impractical for large-scale systems. The majority of mixed-mode analogue/digital neuromorphic platforms, such as the High Input Count Analog Neural Network (HI-CANN)~\citep{schemmel2010wafer}, Neurogrid~\citep{benjamin2014neurogrid} and HiAER-IFAT~\citep{yu201265k}, use analogue circuits to emulate neurons and digital packet-based technology to communicate spikes as AER events. This enables reconfigurable connectivity patterns, while spike timing is expressed implicitly, since a spike typically reaches its destination in less than a millisecond, thus fulfilling the real-time requirement. Digital neuromorphic platforms such as TrueNorth~\citep{merolla2014million} use digital circuits of finite precision to simulate neurons in an event-driven manner, minimising the active power dissipation. Neuromorphic systems suffer from limited model flexibility, since neurons and synapses are fabricated directly in hardware with only a small subset of parameters exposed to the researcher.
SpiNNaker is a biologically inspired, massively-parallel, scalable computing architecture designed by the Advanced Processor Technologies (APT) group at the University of Manchester. SpiNNaker has been optimised to simulate very large-scale spiking neural networks in real-time~\citep{furber2014spinnaker}. SpiNNaker aims to combine the advantages of conventional computers and neuromorphic hardware by utilising low-power programmable cores and scalable event-driven communications hardware.
%\textit{\textbf{[ADD AS MUCH DETAILS ABOUT THE SPINNAKER ARCHITECTURE AND SOFTWARE HERE. Qian Liu: only hardware is ok?]}}.
A direct comparison between neuromorphic platforms is a non-trivial task due to the different hardware implementation technologies as mentioned above.
%Qian Liu modified
The metrics proposed in Table~\ref{tb:hardware_comparison} attempt to expose the advantages and disadvantages of different neuromorphic hardware, and thus to identify the network properties to which each platform is suited.
The scalability of a hardware platform determines the network size limit of a neural application running on it.
Considering the variety of neural and synaptic models, plasticity learning rules and axonal delay lengths, a programmable platform is flexible enough for diverse SNNs, while a hard-wired system supporting only specific models wins on simplicity of design and implementation.
%Comparison metrics could be the precision used to describe the membrane potential of neurons (for the digital platforms) and synaptic weights.
The classification accuracy of an SNN running on a hardware system can differ from that of a software simulation, since the hardware implementation limits the precision of the membrane potentials of neurons (on digital platforms) and of the synaptic weights.
Thus the comparison metrics should include precision as a major assessment of system performance.
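To make the precision constraint concrete, the sketch below applies a simple uniform quantiser to a set of weights. This is an illustrative scheme only, not the quantisation used by any platform in Table~\ref{tb:hardware_comparison}; it shows how the mean weight error shrinks as the synaptic bit-width grows from 4 towards 12 bits.

```python
import numpy as np

def quantise_weights(w, n_bits, w_max=None):
    """Uniformly quantise weights onto signed n_bits fixed-point levels.

    Illustrative sketch of a hardware precision constraint; not the
    scheme of any specific neuromorphic platform.
    """
    w = np.asarray(w, dtype=float)
    if w_max is None:
        w_max = np.max(np.abs(w))
    levels = 2 ** (n_bits - 1) - 1   # e.g. 7 positive levels for 4 bits
    step = w_max / levels            # quantisation step size
    return np.round(w / step) * step

rng = np.random.default_rng(0)
w = rng.standard_normal(1000) * 0.1
for bits in (4, 8, 12):
    err = np.mean(np.abs(quantise_weights(w, bits) - w))
    print(bits, err)
```

The error introduced at low bit-widths is what ultimately perturbs membrane dynamics and, with it, the reported classification accuracy.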
Simulation time is another important measure of running large-scale networks on hardware.
Real-time implementation is an essential requirement for robotic systems because of the real-time input from the neuromorphic sensors.
Running faster than real time is attractive for large/long simulations.
However, due to limited hardware resources the simulation may accelerate or slow down according to the network topology and spike dynamics.
Also, a finer time resolution plays an important role in precision-sensitive neural models and in sub-millisecond tasks~\citep{lagorce2015breaking}.
%Qian Liu done
The energy requirements of each platform form another interesting comparison metric, especially for mobile and robotic applications. Some researchers have suggested the use of energy per synaptic event (J/SE)~\citep{sharp2012power,stromatias2013power} as an energy metric, because the large fan-in and fan-out of a neuron tend to dominate the total energy dissipation during a simulation. Merolla et al. proposed the number of synaptic operations per Watt (Sops/W)~\citep{merolla2014million}.
These two measurements are equivalent presentations of the energy cost of synaptic events: interpreting Sops/W as synaptic operations per second per Watt, i.e. operations per Joule, J/SE and Sops/W are reciprocals of each other.
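The reciprocal relationship makes conversion between the two conventions a one-line computation (function names below are hypothetical, and the 1~nJ/SE figure is a made-up example rather than a value from the tables):

```python
def sops_per_watt(j_per_se):
    """Sops/W (synaptic operations per second per Watt, i.e. per Joule)
    is the reciprocal of the energy per synaptic event in Joules."""
    return 1.0 / j_per_se

def joules_per_se(sops_per_w):
    """Inverse conversion: energy per synaptic event from Sops/W."""
    return 1.0 / sops_per_w

# a hypothetical platform at 1 nJ per synaptic event: ~1 GSops/W
print(sops_per_watt(1e-9))
```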
%Qian Liu modified
For a particular SNN application or benchmark, scalability and programmability determine whether the network can run on a platform at all.
The system performance is then assessed on the accuracy, simulation time and energy use when running the network.
Table~\ref{tb:hardware_comparison} aims to summarise the aforementioned hardware comparison metrics.
% mention spinnaker, for this work
%power
%real-time or accelerated
% latency
%Specifying hardware is of utmost importance when comparing computing times and power consumption. Analog or digital or a hybrid system.
%
%Different platforms have special benefits, purely hardware solutions have low power consumption but lack programmability.
%
%New theories, such as Polychronization, suggest that axonal delays are an integral part of the brain's computing mechanisms~\citep{Izhikevich2005}.
%
%Power consumption is a key issue for mobile applications and robotics. A way to measure is to state the number of \emph{Synaptic operations per Watt} that the hardware is capable of.
%
%An important factor to measure performance is the number of \emph{Synaptic events per second}; i.e. rough throughput
%
%A piece of hardware that is difficult to use/program is of little use, thus \emph{front-end support} is
%
%Neural activity highly depends on synapses, specifying what model was used and its precision will impact on the performance.
%table summary?