No clue why digit patterns don't work with the 2015 orchestrated paper

nsf2000
Posts: 16
Joined: Thu Nov 05, 2015 2:35 pm

No clue why digit patterns don't work with the 2015 orchestrated paper

Post by nsf2000 »

Hi,

I have tried many times to feed digit patterns into the 2015 orchestrated repository. The patterns used in the first and second stages (1run_init, 2run_learn) are shown below (28 by 28 pixels). Since 28x28 is much smaller than the 64x64 in the original paper, I changed the Poisson-to-excitatory connection sparseness to 0.5. Then I followed the forum method to regenerate the rf1.pat file based on the PSTH. I found that the firing population is only around 120-130 neurons per pattern when I feed in 5 patterns (digits 0 to 4), with 10 patterns it is even worse (around 102), whereas the original paper has nearly 600 firing neurons per pattern.

After training and testing, the pact file looks like this:

Code:

3597.900000 34.999999 1.000000 34.999999 1.000000 34.999999 1.000000 34.999999 1.000000 34.999999 1.000000 0.159180
3598.000000 34.999999 1.000000 34.999999 1.000000 34.999999 1.000000 34.999999 1.000000 34.999999 1.000000 0.102539
3598.100000 34.999999 1.000000 34.999999 1.000000 34.999999 1.000000 34.999999 1.000000 34.999999 1.000000 0.134766
3598.200000 36.666666 1.000000 36.666666 1.000000 36.666666 1.000000 36.666666 1.000000 36.666666 1.000000 0.101562
3598.300000 31.666666 1.000000 31.666666 1.000000 31.666666 1.000000 31.666666 1.000000 31.666666 1.000000 0.154297
3598.400000 23.333333 1.000000 23.333333 1.000000 23.333333 1.000000 23.333333 1.000000 23.333333 1.000000 0.111328
3598.500000 38.333333 1.000000 38.333333 1.000000 38.333333 1.000000 38.333333 1.000000 38.333333 1.000000 0.126953
3598.600000 31.666666 1.000000 31.666666 1.000000 31.666666 1.000000 31.666666 1.000000 31.666666 1.000000 0.148438
This means that the network cannot classify any of the stimuli. I am totally lost and would appreciate any guidance. I attached my input pat file (num.zip, please change the suffix to num.pat), which is supposed to be correct.

Also, when I change the external sparseness to 0.5, the running network throws this exception:

Code:

(!!) on rank 1: SparseConnection: (P10Connection): Buffer full after pushing 2104392 elements. There are pruned connections!
I was also wondering whether the network fails to show cell assemblies because of a wrong configuration on my part, or for other reasons.

Thank you!

Best,
Sufeng
Attachments
num.zip
1.jpg
0.jpg
zenke
Site Admin
Posts: 156
Joined: Tue Oct 14, 2014 11:34 am
Location: Basel, CH
Contact:

Re: No clue why digit patterns don't work with the 2015 orchestrated paper

Post by zenke »

Hi,

I can only recommend looking at the spiking network activity to get a feel for how the network responds. The pattern activity output is only useful to quantify a phenomenon that should already be visible by eye in the spike raster plots. Also, are you sure that changing the sparseness of the input connections has any effect? In the example, the random matrix is overwritten with a structured matrix with circular receptive field positions, as described in the paper.
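If it helps, a quick way to eyeball the activity is to scatter-plot the .ras output. A minimal sketch, assuming the plain-text format with one "time neuron_id" pair per line; the file name out.e.ras is just a placeholder:

Code:

import numpy as np
import matplotlib.pyplot as plt

# Each line of the raster file is assumed to hold one spike: time and neuron id.
t, nid = np.loadtxt('out.e.ras', unpack=True)

plt.plot(t, nid, '.', markersize=1)  # one dot per spike
plt.xlabel('time [s]')
plt.ylabel('neuron id')
plt.show()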

Best,

F
nsf2000
Posts: 16
Joined: Thu Nov 05, 2015 2:35 pm

Re: No clue why digit patterns don't work with the 2015 orchestrated paper

Post by nsf2000 »

Hi Zenke,

Thank you so much for your suggestions. Sorry, I made a mistake here: the sparseness does not affect the Poisson-to-excitatory connections, since they are overwritten by the rf_discsR8.mtx file. The differences in my results are caused by the internal connection sparseness between the excitatory and inhibitory neurons.

I plotted the rf_discsR8.mtx file, shown in the first attached image. May I ask why the initial receptive field file looks like this? It looks different from the circular receptive fields mentioned in the paper (I think I misunderstood your meaning).

Since the ras file is the only thing I can observe, the output raster plot is shown in the second attached figure; it doesn't show any subpopulation firing activity. My guess is that the problem is caused by the original receptive field (rf_discsR8.mtx, which is 64 by 64) being mismatched with my input (28 by 28). Do you agree?

How can I re-create receptive fields like Figure 3e in the paper? I checked the .sse and .see files, but the online tutorial doesn't give any hints about what each column means. From the source code it looks like each column in the file is a neuron matrix output, while in the default mode each row of the .sse and .see files is the weight matrix of one specific neuron. Is that right?

If I want to test sequential input, for example repeatedly presenting 1 2 3 4 so the network learns the sequence and then presenting 1 2 3 without 4 (I expect the '4' pattern subpopulation to fire), do I need to write another class that inherits from StimulusGroup?

One more thing: in the paper, the synapses among inhibitory neurons are fixed. May I ask the reason? Is it because inhibitory synaptic plasticity is still unclear in current research?

I apologize for asking so many questions because of my limited knowledge; I don't want to bother you too much.

Best,

Sufeng
Attachments
receptive field.jpg
ras.jpg
zenke
Site Admin
Posts: 156
Joined: Tue Oct 14, 2014 11:34 am
Location: Basel, CH
Contact:

Re: No clue why digit patterns don't work with the 2015 orchestrated paper

Post by zenke »

nsf2000 wrote:Hi Zenke,

Thank you so much for your suggestions. Sorry, I made a mistake here: the sparseness does not affect the Poisson-to-excitatory connections, since they are overwritten by the rf_discsR8.mtx file. The differences in my results are caused by the internal connection sparseness between the excitatory and inhibitory neurons.

I plotted the rf_discsR8.mtx file, shown in the first attached image. May I ask why the initial receptive field file looks like this? It looks different from the circular receptive fields mentioned in the paper (I think I misunderstood your meaning).
Hi Sufeng,

As I explained earlier, you will have to embed your 28x28 images into the 64x64 input space. Remember that the input connections connect the input space with the intrinsic network space. The circular receptive fields "exist" only in this 64x64 input space. Although both have the same dimensionality of 4096, they are two different spaces. Thus each column of the input matrix (note that Auryn assumes left multiplication, which puts the indices of weight matrices in the "right" order, i.e. pre, post) is one image in the input space. I don't understand what you are plotting in the figure. rf_discsR8.mtx is a MatrixMarket file as well; the different extension comes from Python's scipy, which uses it as the default. You can create your own using Python or MATLAB. Just make sure you have the right row/column order as described in the wiki: https://www.fzenke.net/auryn/doku.php?id=manual:wmat.
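To make the embedding concrete, here is a minimal sketch (not the repository code) that centres a 28x28 pattern on the 64x64 input grid and writes an input-to-network weight matrix in MatrixMarket format. The placeholder receptive fields and the file name rf_custom.mtx are illustrative only:

Code:

import numpy as np
from scipy.io import mmwrite
from scipy.sparse import lil_matrix

N = 64 * 64  # 4096 input neurons and 4096 network neurons

def embed28(img28):
    """Place a 28x28 image at the centre of a 64x64 canvas and flatten it."""
    canvas = np.zeros((64, 64))
    off = (64 - 28) // 2  # 18-pixel margin on each side
    canvas[off:off + 28, off:off + 28] = img28
    return canvas.flatten()

rng = np.random.default_rng(1)
w = lil_matrix((N, N))  # rows: presynaptic (input), columns: postsynaptic (network)
for post in range(N):
    rf = embed28(rng.random((28, 28)) < 0.05)  # placeholder receptive field
    w[np.flatnonzero(rf), post] = 1.0          # one column = one receptive field image
mmwrite('rf_custom', w.tocsr())                # writes rf_custom.mtx

In a real run you would replace the random placeholder with your actual digit masks or disc-shaped fields; the point is only the pre/post orientation and the 28-to-64 padding.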

nsf2000 wrote: Since the ras file is the only thing I can observe, the output raster plot is shown in the second attached figure; it doesn't show any subpopulation firing activity. My guess is that the problem is caused by the original receptive field (rf_discsR8.mtx, which is 64 by 64) being mismatched with my input (28 by 28). Do you agree?
Your raster plot suggests that all neurons initially respond too weakly to the stimulus. If neurons do not cross the plasticity threshold initially, the patterns are unlearned from the input (LTD regime). That is all explained in the paper.

nsf2000 wrote: How can I re-create receptive fields like Figure 3e in the paper? I checked the .sse and .see files, but the online tutorial doesn't give any hints about what each column means. From the source code it looks like each column in the file is a neuron matrix output, while in the default mode each row of the .sse and .see files is the weight matrix of one specific neuron. Is that right?
If I remember correctly, the .sse and .see files contain time series of mean weights between or within cell assemblies. For the rest, see above.
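Under that assumption (a time column followed by mean-weight columns; both the column layout and the file name rf1.see are guesses here), a quick look would be:

Code:

import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt('rf1.see')      # assumed: column 0 is time, the rest are mean weights
plt.plot(data[:, 0], data[:, 1:])
plt.xlabel('time [s]')
plt.ylabel('mean weight')
plt.show()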
nsf2000 wrote: If I want to test sequential input, for example repeatedly presenting 1 2 3 4 so the network learns the sequence and then presenting 1 2 3 without 4 (I expect the '4' pattern subpopulation to fire), do I need to write another class that inherits from StimulusGroup?
StimulusGroup already has a mode for that which should work. Please check
http://www.fzenke.net/auryn/doxygen/cur ... Group.html
and
http://www.fzenke.net/auryn/doxygen/cur ... da69f84efb
You would need SEQUENTIAL.
nsf2000 wrote: One more thing: in the paper, the synapses among inhibitory neurons are fixed. May I ask the reason? Is it because inhibitory synaptic plasticity is still unclear in current research?
That is correct.

Sorry for being brief, I don't have more time right now. I hope that helps.

Best,

Friedemann
zenke
Site Admin
Posts: 156
Joined: Tue Oct 14, 2014 11:34 am
Location: Basel, CH
Contact:

Re: No clue why digit patterns don't work with the 2015 orchestrated paper

Post by zenke »

Btw, how many cores are you using to run the sim? And are you using v0.5 or a newer version? There is an issue which affects the synaptic strength if you use a number of cores other than 4. See http://www.fzenke.net/auryn/forum/viewt ... p?f=3&t=30; that could also contribute to your problems.
nsf2000
Posts: 16
Joined: Thu Nov 05, 2015 2:35 pm

Re: No clue why digit patterns don't work with the 2015 orchestrated paper

Post by nsf2000 »

Hi Zenke,

Thank you so much for your answer! Sorry for the unclear questions and information that may have confused you. I used 8 cores and the latest version of Auryn; I only just became aware of the development branch and have switched to it. I guess I have misunderstood some basic neuroscience concepts, which created so much confusion. In your answer you wrote:
The circular receptive fields "exist" only in this 64x64 input space
My understanding is that the receptive fields exist only in the Poisson neuron group (64x64), correct? And rf_discsR8.mtx, which is the weight matrix (although all connections are either 0 or 1), represents the Poisson neuron connections. If this is true, do Figures 3a and 3e in the paper represent the plastic connection strengths between input and excitatory neurons?

I am trying to make the spiking neural network process inputs of different sizes, not limited to 64x64. Tiling four 28x28 images together as one 64x64 image might work, but that feels like an engineering hack, not what biology is supposed to do. The figure I plotted (the one that confused you) is rf_discsR8.mtx, which I understand as the weight matrix describing how the Poisson neurons (input stimulus) map to the excitatory neurons. Instead of tiling four 28x28 inputs, I changed the connection density to see whether more connections between Poisson and excitatory neurons could compensate for the input size mismatch. That was my motivation for changing the sparseness and plotting rf_discsR8.mtx to see what it looks like. At first I mistook rf_discsR8.mtx for a plot of the Poisson neurons' receptive fields like Figure 3a in the paper.

I also found that the neurons' initial responses to the stimulus are too weak when I checked the generated rf.pat file. In the paper you said:
...However, when weights were initially too weak or afferent connections were chosen randomly (that is, no predefined spatial receptive fields) all neurons remained at the low activity fixed point and no assembly structure was formed.
This behaviour was changed when we allowed the strength of LTD to change through homeostatic metaplasticity on the timescale of tens of minutes to hours...
I don't think I understand this part well. I understand homeostatic metaplasticity as consolidation (I may be wrong here). How can I enable metaplasticity in the initial stage so that I get stronger responses among the neurons?

Thank you very much :P

Best,
Sufeng
zenke
Site Admin
Posts: 156
Joined: Tue Oct 14, 2014 11:34 am
Location: Basel, CH
Contact:

Re: No clue why digit patterns don't work with the 2015 orchestrated paper

Post by zenke »

Hey,

Yes, one receptive field is defined by the strengths of all afferent connections to one neuron in the network. For visualization we think of these 4096 input neurons as a 64x64 grid, so by plotting the connection strengths of one neuron as a 64x64 matrix you get an image of its receptive field (e.g. Fig. 3e). I am attaching a script with which you can make new input connections (the mtx file in the example) with a higher connection density (sparseness). You can change the parameter R in the file for a different disc radius; if I understand you correctly, you want bigger circles. When you run the script, it plots the receptive fields of the first 10 neurons in an overlay so you can get an idea of the coverage per neuron. When you close the window, the script goes on to generate disc-like receptive fields for all 4096 network neurons and stores them to an mtx file.
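For readers without the attachment, here is a rough sketch of what such a script might do (the actual mk_structured_input_connections.py may differ in its details, and this version has no wrap-around at the grid borders):

Code:

import numpy as np
from scipy.io import mmwrite
from scipy.sparse import lil_matrix

R = 8             # disc radius; increase R for bigger circles
side = 64
N = side * side   # 4096 input neurons and 4096 network neurons

yy, xx = np.mgrid[0:side, 0:side]
rng = np.random.default_rng(0)

w = lil_matrix((N, N))  # rows: input (pre), columns: network (post)
for post in range(N):
    cx, cy = rng.integers(0, side, size=2)           # random disc centre
    disc = (xx - cx) ** 2 + (yy - cy) ** 2 <= R ** 2
    w[np.flatnonzero(disc.ravel()), post] = 1.0      # binary disc receptive field
mmwrite('rf_discs_custom', w.tocsr())                # writes rf_discs_custom.mtx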

The second part of your post refers to the other connection objects in the repository and the Supplementary Figures in the paper. If we allow the plasticity threshold to slide slowly, the network can still learn. However, how such threshold movement is implemented in biology is still very much debated. You should have a look at the BCM paper, which pretty much introduced the concept of a sliding threshold. Here is the reference:
Bienenstock, E., Cooper, L., and Munro, P. (1982). Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J Neurosci 2, 32–48.
http://www.jneurosci.org/content/2/1/32.abstract


By the way, just out of curiosity, what SpeedFactor do you get when running on 8 cores? The SpeedFactor is given in the log file at the end of each simulation.
Attachments
mk_structured_input_connections.py.gz
nsf2000
Posts: 16
Joined: Thu Nov 05, 2015 2:35 pm

Re: No clue why digit patterns don't work with the 2015 orchestrated paper

Post by nsf2000 »

Thank you very much! I will try your script and look at the paper you referred to. One more question, which I probably should not post here, but just out of curiosity: currently the network exhibits learning through the subpopulation firing rates. Some literature I looked at mentions that these excitatory subpopulations may have a competition mechanism so that in the end only one pattern wins and the rest are inhibited. So in the 2015 orchestrated model, for example, when the network sees the cued rectangle (noisy rectangle), the other excitatory subpopulations are supposed to be inhibited in some sense. What do you think? :P

For 8 cores, the SpeedFactor of 1run_init is around 0.39, 2run_learn is around 0.4, and 3run_cue is around 0.38 across the different processes.

Thank you very much!

Sufeng
zenke
Site Admin
Posts: 156
Joined: Tue Oct 14, 2014 11:34 am
Location: Basel, CH
Contact:

Re: No clue why digit patterns don't work with the 2015 orchestrated paper

Post by zenke »

Hi there,

The cross-inhibition between patterns that you are talking about is achieved through global inhibition from the inhibitory population. You are right: otherwise the entire network would not work and all cells would end up being active. Since the inhibitory weights are learned and the learning rule has a Hebbian component, the inhibition is presumably also specific to a certain degree. However, I did not investigate this in the paper. Global inhibition is the standard.

Thanks for those numbers. I get similar values using only 4 cores. Of course we have different computers, but you could check if you actually need to use 8 cores. The simulations tend to get slower again if the number of cores is too large (cf. http://journal.frontiersin.org/article/ ... 6/abstract).

Cheers,

Friedemann
nsf2000
Posts: 16
Joined: Thu Nov 05, 2015 2:35 pm

Re: No clue why digit patterns don't work with the 2015 orchestrated paper

Post by nsf2000 »

Hello,

Again, thank you very much! A final question (I don't want to pester you endlessly :P): I understand this global inhibition as something like a dopamine effect in reinforcement learning (mentioned in the paper as a globally secreted factor???). Am I understanding this correctly?

Regarding the computation cores: yes, the computer I used has 8 cores and 16 threads in total, and I used the htop command to monitor in real time how much compute resource was being utilized. I guess the slowness may be caused by process synchronization; Amdahl's law gives the maximum speedup in parallel computing.
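For what it's worth, the bound is easy to make concrete (the parallel fraction p = 0.95 below is an assumed number, not measured from Auryn):

Code:

# Amdahl's law: best-case speedup with n cores when a fraction p of the
# run time parallelizes.
def amdahl(p, n):
    return 1.0 / ((1.0 - p) + p / n)

print(amdahl(0.95, 4))  # ~3.5x
print(amdahl(0.95, 8))  # ~5.9x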

Thank you!
Regards,
Sufeng