Similar to: Neural Computation (2001) 13: 477-504
The limits of counting accuracy in distributed neural representations
A.R. Gardner-Medwin (1) & H.B. Barlow (2)
(1) Dept. of Physiology, University College London, London WC1E 6BT, UK
and (2) Physiological Laboratory, Cambridge CB2 3EG, UK
Keywords: counting, representation, learning, overlap, sensory coding, efficiency, frequency, association, adaptation, attention
Learning about a causal or statistical association depends on comparing frequencies of joint occurrence with frequencies expected from separate occurrences, and to do this events must somehow be counted. Physiological mechanisms can easily generate the necessary measures if there is a direct, one-to-one, relationship between significant events and neural activity, but if the events are represented across cell populations in a distributed manner, the counting of one event will be interfered with by the occurrence of others. Although the mean interference can be allowed for, there is inevitably an increase in the variance of frequency estimates that results in the need for extra data to achieve reliable learning. This lowering of statistical efficiency (Fisher, 1925) is calculated as the ratio of the minimum to actual variance of the estimates. We define two neural models, based on presynaptic and Hebbian synaptic modification, and explore the effects of sparse coding and the relative frequencies of events on the efficiency of frequency estimates. High counting efficiency must be a desirable feature of biological representations, but the results show that the number of events that can be counted simultaneously with 50% efficiency is less than the number of cells or 0.1-0.25 of the number of synapses (on the two models), i.e. many fewer than can be unambiguously represented. Direct representations would lead to greater counting efficiency, but distributed representations have the versatility to detect and count many unforeseen or rare events. Efficient counting of rare but important events requires that they engage more active cells than common or unimportant ones. The results suggest reasons why representations in the cerebral cortex appear to use extravagant numbers of cells and modular organisation, and they emphasise the importance of neuronal trigger features and the phenomena of habituation and attention.
1 Introduction
The world we live in is highly structured, and to compete in it successfully an animal has to be able to use the predictive power that this structure makes possible. Evolution has moulded innate genetic mechanisms that help with the universal basics of finding food, avoiding predators, selecting habitats, and so forth, but much of the structure is local, transient, and stochastic, rather than universal and fully deterministic. Higher animals greatly improve the accuracy of their predictions by learning about this statistical structure through experience: they learn what sensory experiences are associated with rewards and punishments, and they also learn about contingencies and relationships between sensory experiences even when these are not directly reinforced.
Sensory inputs are graded in character, and may provide weak or strong evidence for identification of a discrete binary state of the environment such as the presence or absence of a specific object. Such classifications are the data on which much simple inference is built, and about which associations must be learned. Learning any association requires a quantitative step in which the frequency of a joint event is observed to be very different from the frequency predicted from the probabilities of its constituents. Without this step, associations cannot be reliably recognised, and inappropriate behaviour could result from attaching too much importance to chance conjunctions or too little to genuine causal ones. Estimating a frequency depends in its turn on counting, using that word in the rather general sense of marking when a discrete event occurs and forming a measure of how many times it has occurred during some epoch.
Counting is thus a crucial prerequisite for all learning, but the form in which sensory experiences are represented limits how accurately it can be done. If there is at least one cell in a representation of the external world that fires in one-to-one relation to the occurrence of an event (i.e. if that event is directly represented according to our definitions; see Box 1), then there is no difficulty in seeing how physiological mechanisms within such a cell could generate an accurate measure of the event frequency. On the other hand there is a problem when the events correspond to patterns on a set of neurons (i.e. with a distributed representation; Box 1). In a distributed representation a particular event causes a pattern of activity in several cells, but even when this pattern is unique, there is no unique element in the system that signals when the particular event occurs and does not signal at other times. Each cell active in any pattern is likely to be active for several different events during a counting epoch, so no part of the system is reliably active when, and only when, the particular event occurs.
The interference that results from this overlap in distributed representations can be dealt with in two ways: (1) cells and connections can be devoted to identifying directly in a one-to-one manner when the patterns occur, i.e. a direct representation can be generated, or (2) the interference can be accepted and the frequency of occurrence of the distributed patterns estimated from the frequency of use of their individual active elements. The second procedure is liable to increase the variance of estimated counts, and distributed representation would be disadvantageous when this happens because the speed and reliability of learning would be impaired.
On the other hand, distributed representation is often regarded as a desirable feature of the brain because it brings the capacity to distinguish large numbers of events with relatively few cells (see for instance Hinton & Anderson 1981; Rumelhart & McClelland 1986; Hinton, McClelland & Rumelhart 1986; Churchland 1986; Farah 1994). With sparse distributed representations, networks can also operate as content-addressable memories that store and retrieve amounts of information approaching the maximum permitted by their numbers of modifiable elements (Willshaw et al., 1969; Gardner-Medwin, 1976).
Recently Page (2000) has emphasised some disadvantages of distributed representations and argued that connectionist models should include a "localist" component, but we are not aware of any detailed discussion of the potential loss of counting accuracy that results from overlap, so our goal in this paper was to analyse this quantitatively. To give the analysis concrete meaning we formulated two specific neural models of the way frequency estimates could be made. Neither is intended as a direct model of the way the brain actually counts, nor do we claim that counting is the sole function of any part of the brain, but the models help to identify issues that relate more to the nature of representations than to specific mechanisms. Counting is a necessary part of learning, and representations that could not support efficient counting could not support efficient learning.
We express our results in terms of the reduction in statistical efficiency (Fisher, 1925) of these models, since this reveals the practical consequences of the loss of counting accuracy in terms of the need for more experience before an association or contingency can be learned reliably. We do not know of any experimental measures of the statistical efficiency of a learning task, but it has a long history in sensory and perceptual psychology where, for biologically important tasks, the efficiencies are often surprisingly high (Rose, 1942; Tanner & Birdsall, 1958; Jones, 1959; Barlow, 1962; Barlow & Reeves, 1979; Barlow & Tripathy, 1997).
From our analysis we conclude that compact distributed representations (i.e. ones with little redundancy) enormously reduce the efficiency of counting and must therefore slow down reliable learning, but this is not the case if they are redundant, having many more cells than are required simply for representation. The analysis enables us to identify principles for sustaining high efficiency in distributed representations, and we have confirmed some of the calculations through simulation. We think these results throw light on the complementary advantages of distributed and direct representation.
1.1 The statistical efficiency of counting. The events we experience are often determined by chance, and it is their probabilities that matter for the determination of optimal behaviour. Probability estimates must be based on finite samples of events, with inherent variability, and accurate counting is advantageous insofar as it helps to make the most efficient use of such samples. For simplicity, we analyse the common situation in which the numbers of events follow (at least approximately) a Poisson distribution about the mean, or expected, value m. The variance is then equal to the mean (m), and the coefficient of variation (i.e. standard deviation / mean) is 1/√m.
A good algorithm for counting is unbiased, i.e. on average it gives the actual number within the sample, but it may nevertheless introduce a variance V. This variance arises within the nervous system, in a manner quite distinct from the Poisson variance whose origin is in the environment; we assume they are independent and therefore sum to a total variance V + m. It is convenient to consider the fractional increase of variance, caused by the algorithm in a particular context, which we call the relative variance (r):
r = V / m (1.1)
Adding variance to a probability estimate has a similar effect to making do with a smaller sample, with a larger coefficient of variation. Following Fisher (1925) we define efficiency e as the fraction of the sample that is effectively made use of:
e = m / (m + V) = (1 + r)⁻¹ (1.2)
Efficiency is a direct function of r, and if r > 1 then e < 50%, which means that the time and resources required to gather reliable data will be more than twice what is in principle achievable with an ideal algorithm. If r << 1 then efficiency is nearly 100% and there would be little to gain from a better algorithm in the same situation.
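In code, the relation between added variance and efficiency (equations 1.1, 1.2) can be sketched as follows; the function name and the numerical values are ours, chosen only for illustration:

```python
def efficiency(m, V):
    """Fisher efficiency of an unbiased counting algorithm that adds
    variance V to an estimate of a Poisson count with mean m."""
    r = V / m                 # relative variance (equation 1.1)
    return 1.0 / (1.0 + r)    # e = m / (m + V)  (equation 1.2)

# An algorithm that doubles the total variance (V = m) wastes half the sample:
print(efficiency(10.0, 10.0))   # 0.5
print(efficiency(10.0, 1.0))    # ~0.91: little to gain from a better algorithm
```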
2 A simple illustration
As an illustration of the problem, consider how to count the occurrences of a particular letter (e.g. 'A') in a sequence of letters forming some text. If 'A' has a direct representation in the sense that an element is active when and only when 'A' occurs (as on a keyboard) then it is easy to count the occurrences of 'A' with precision. But if 'A' is represented by a distinctive pattern of active elements (as in the ASCII code) then the problem is to infer from counts of usage of individual elements whether and how often 'A' has occurred. The ASCII code is compact, with 127 keyboard and control characters distinguished on just 7 bits. Obviously 7 usage counts cannot in general provide enough information to infer 127 different counts precisely. The result is underdetermined except for a few simple cases. In general there is only a statistical relation between the occurrence of letters and the usage of their representational elements, and our problem is to calculate, for cases ranging from direct representations to compact codes, how much variance is added when inferring these frequencies.
Note that 7 specific subsets of characters have a 1:1 relation to activity on individual bits in the code. For example, the least significant bit is active for a set including the characters (ACEG..) as well as many others. Such subsets have special significance because the summed occurrences of events in them is easily computed on the corresponding bit. In the ASCII code they are generally not subsets of particular interest, but in the brain it would be advantageous for them to correspond to categories of events that can be grouped together for learning. This would improve generalisation, increase the sample size for learning about the categories, and reduce the consequences of overlap errors. Our analysis ignores the benefit from such organisation and assumes that the representations of different events are randomly related, though we discuss this further in section 6.2.
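A minimal sketch of this illustration: only per-bit usage counts are stored, and a naive read-out for 'A' is strongly biased upward by the other letters that share its active bits. The alphabet, text length and random seed are arbitrary choices of ours:

```python
import random

random.seed(0)
text = random.choices("ABCDEFGH", k=1000)   # hypothetical letter sequence

# Accumulate usage counts of the 7 ASCII bits -- the only stored variables.
usage = [0] * 7
for ch in text:
    code = ord(ch)
    for bit in range(7):
        if code >> bit & 1:
            usage[bit] += 1

# Naive estimate for 'A' (code 65, bits 0 and 6): average usage of its bits.
a_bits = [b for b in range(7) if ord('A') >> b & 1]
naive = sum(usage[b] for b in a_bits) / len(a_bits)

true = text.count('A')
print(true, round(naive))   # naive estimate is inflated by overlapping letters
```

Correcting for the mean interference, as the models below do, removes this bias but not the added variance.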
The conversion of directly represented key presses into a distributed ASCII code is certainly not advantageous for the purpose of counting characters. The events that the brain must count, however, are not often directly represented at an early stage, nor do they occur one at a time in a mutually exclusive manner as do typed characters. Each event may arouse widespread and varied activity that requires much neural computation before it is organised in a consistent form, suitable for counting and learning. We assume here that perceptual mechanisms exploit the redundancy of sensory messages and generate suitable inputs for our models as outlined below and discussed later (Section 6). These simplifications enable us to focus on the limitations of counting accuracy that arise even under relatively ideal conditions.
3 Formal definition of the task
Consider a set of Z binary cells (Fig. 1) on which is generated, one at a time, a sequence of patterns of activity belonging to a set {P1..PN} that correspond to N distinct categorisations of the environment described as events {E1..EN}. The patterns (binary vectors) are said to represent the events. Each pattern Pi is an independent random selection of Wi active cells out of the Z cells, with the activity ratio ai = Wi/Z. The corresponding vector {xi1..xiZ} has elements 1 or 0 where cells are active or inactive in Pi. The active cells in two different patterns Pi, Pj overlap by Uij cells (Uij ≥ 0), where
Uij = Σcells k (xik xjk)
Note that two different events may occasionally have identical representations, since these are assigned at random.
Consider an epoch during which the total numbers of occurrences {mi} of events {Ei} can be treated as independent Poisson variables with expectations {⟨mi⟩}. The totals M and mT are defined as M = Σi mi and mT = Σi ⟨mi⟩. The task we define is to estimate, using only plausible neural mechanisms, the actual number of occurrences (mc) of representations of an individual event Ec when this event is identified by a test presentation after the counting epoch. We suppose that the system can employ an accurate count of the total occurrences (M) summed over all events during the epoch, and also the average activity ratio ā during the epoch:
ā = Σi (mi ai) / M (3.1)
We require specific and neurally plausible models of the way the task is done, and these are described in the next two sections. The first model (section 3.1) is based on modifiable projections from the cells of a representation. They support counting by increasing effectiveness in proportion to presynaptic usage, though associative changes or habituation might alternatively support learning or novelty detection with similar constraints. The second model (3.2) is based on changes of internal support for a pattern of activation. This greatly adds to the number of variables available to store relevant information by involving modifiable synapses between elements of a distributed representation, analogous to the rich interconnections of the cerebral cortex.
Readers wishing to skip the mathematical derivations in sections 3.1, 3.2 should look at their initial paragraphs with Figs. 1,2 and proceed to section 4.
3.1 The projection model. This model (Fig.1) estimates mc by obtaining a sum Sc of the usage, during the epoch, of all those cells that are active in the representation of Ec. This computation is readily carried out with a single accumulator cell X (Fig. 1) onto which project synapses from all the Z cells.
The strengths of these synapses increase in proportion to their usage. When the event Ec is presented in a test situation after the end of an epoch of such counting, the summed activation onto X gives the desired total:
Sc = Σcells k ( xck Σevents j (xjk mj) )
= mc Wc + Σevents j≠c (mj Ujc) (3.2)
If there were no interference from overlap between active cells in Ec and in any other events occurring during the epoch (i.e. if mj = 0 for all j for which Ujc > 0), then Sc = mcWc. In this situation, Sc/Wc gives a precise estimate of mc and is easily computed since Wc is the number of cells active during the test presentation of Ec. In general Sc will be larger than mcWc due to overlap between event representations. An adjustment for this can be made on the basis of the total number of events M and the average activity ratio ā, yielding a revised sum S'c:
S'c = Sc − M ā Wc (3.3)
Expansion using equations 3.1,3.2 yields:
S'c = mc Wc (1 − ac) + Σevents j≠c ( mj (Ujc − aj Wc) ) (3.4)
The expectation of each term in the sum in equation 3.4 is zero, since ⟨Ujc⟩ = aj Wc, and the covariance for variations of mj and Ujc is zero since they are determined by independent processes. An unbiased estimate m̂c of mc is therefore given by:
m̂c = S'c / (Wc (1 − ac)) (3.5)
To calculate the reliability and statistical efficiency of this estimate m̂c we need to know the variance of S'c due to the interference terms in equation 3.4. This is simplified by the facts that mj and Ujc vary independently and that ⟨Ujc⟩ = aj Wc:
Var(S'c) = Σj≠c ( ⟨mj⟩² Var(Ujc) + Var(Ujc) Var(mj) ) (3.6)
Ujc has a hypergeometric distribution, close to a binomial distribution, with expectation aj Wc and variance aj (1 − aj)(1 − ac)(1 − 1/Z)⁻¹ Wc. Substituting these values, and ⟨mj⟩ = Var(mj) = mj for the Poisson distribution of mj, we obtain:
Var(S'c) = Wc Σj≠c ( aj (1 − aj)(1 − ac)(1 − 1/Z)⁻¹ (mj + mj²) ) (3.7)
Note that this analysis includes two sources of uncertainty in estimates of mc: variation of the frequencies of interfering events around their means, and uncertainty of the overlap of representations of individual interfering events. The overlaps between different representations are strictly fixed quantities for a given nervous system, so this source of variation does not contribute if the nervous system can adapt appropriately. Results of calculations are therefore given both for the full variance (equation 3.7) and for the expectation value ⟨Varm(S'c)⟩ when {Ujc} is fixed, i.e. for variations of {mj} alone. This modified result is obtained as follows. Instead of equation 3.6 we have the variance of S'c (equation 3.4) due to variations of {mj} alone:
Varm(S'c) = Σj≠c ( (Ujc − aj Wc)² mj ) (3.8)
This depends on the particular (fixed) values of {Ujc}, but we can calculate its expectation for randomly selected representations:
⟨Varm(S'c)⟩ = Σj≠c ( (⟨Ujc²⟩ − 2 aj Wc ⟨Ujc⟩ + aj² Wc²) mj )
= Wc Σj≠c ( aj (1 − aj)(1 − ac)(1 − 1/Z)⁻¹ mj ) (3.9)
This expression is similar to equation 3.7, omitting the terms mj². Note that if mj << 1 for all events that might occur, the difference between the two expressions is negligible. This corresponds to a situation where there may be many possible interfering events but each one has a low probability of occurrence. The variance does not then depend on whether individual overlaps are fixed and known, since the events that occur are themselves unpredictable.
The relative variance r (equation 1.1) for an estimate m̂c of a desired count is obtained by dividing the results in equations 3.7, 3.9 firstly by the square of the divisor in equation 3.5, (Wc (1 − ac))², and secondly by the expectation of the count mc. Square brackets are used to denote the terms in equation 3.7 due to overlap variance that are omitted in equation 3.9:
r = (1 − 1/Z)⁻¹ Σj≠c ( aj (1 − aj) (mj [+ mj²]) ) / ( Wc (1 − ac) mc ) (3.10)
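The projection-model estimator (equations 3.2, 3.3 and 3.5) can be sketched in a short simulation. The parameters echo the conditions used later for the figures (Z = 100, N = 101, a = 0.1, mean 10), but this is an illustrative sketch of ours, not the simulation reported in the paper:

```python
import math
import random

random.seed(1)
Z, N, a, mean = 100, 101, 0.1, 10.0   # cells, events, activity ratio, Poisson mean
W = int(a * Z)

# Each event is an independent random W-out-of-Z pattern.
patterns = [random.sample(range(Z), W) for _ in range(N)]

def poisson(lam):
    """Knuth's Poisson sampler (adequate for modest lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

counts = [poisson(mean) for _ in range(N)]   # occurrences m_i during the epoch

# Usage of each cell over the epoch -- the stored synaptic strengths.
usage = [0.0] * Z
for pat, m in zip(patterns, counts):
    for cell in pat:
        usage[cell] += m

M = sum(counts)
abar = a                         # all activity ratios are equal in this sketch

# Estimate the count of event c (equations 3.2, 3.3, 3.5).
c = 0
Sc = sum(usage[k] for k in patterns[c])      # summed usage of E_c's cells
Spc = Sc - M * abar * W                      # corrected sum S'c
est = Spc / (W * (1 - a))                    # unbiased estimate of m_c

print(counts[c], round(est, 1))
```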
3.2 The internal support model. In the projection model the stored variables correspond to the usage of individual cells. The number of such variables is restricted to the number of cells Z, and this is a limiting factor in inferring event frequencies. The second model takes advantage of the greater number of synapses connecting cells, which in the cerebral cortex can exceed the number of cells by factors of 10³ or more. The numbers of pairings of pre- and post-synaptic activity can be stored by Hebbian mechanisms and yield substantially independent information at separate synapses (Gardner-Medwin, 1969). The number of stored variables for a task analogous to counting is greatly increased, though at the cost of more complex mechanisms for handling the information.
The support model (Fig.2) employs excitatory synapses that acquire strengths proportional to the number of times the pre- and post-synaptic cells have been active together during events experienced in a counting epoch. Full connectivity (Z² synapses counting all possible pairings of activity) is assumed here in order to establish optimum performance, though the same principles would apply in a degraded manner with sparse connectivity. During test presentation of the counted event Ec the potentiated synapses between its active cells act in an autoassociative manner to give excitatory support for maintenance of the representation (Gardner-Medwin, 1976, 1989). The extent of this internal excitatory support depends substantially on whether, and how often, Ec has been active during the epoch. Interference is caused by overlapping events, just as with the projection model though with less impact because the shared fraction of pairings of cell activity is, with sparse representations, much less than the shared fraction of active cells.
Measurement of internally generated excitation requires appropriate external handling of the cell population (Fig. 2). In principle the whole histogram of internal excitation onto different cells can be established by imposing fluctuating levels of diffuse inhibition along with activation of the pattern representing Ec. Our analysis requires just the total (or average) excitation onto the cells of Ec (Equation 3.11, below). The neural dynamics may introduce practical limitations on the accuracy of such a measure in a given period of time, so our results represent an upper bound on performance employing this model.
We restrict the analysis for simplicity to situations with equal activity ratios for all events (aj ≡ a; Wj ≡ W) and full connectivity. Each of Z cells is connected to every other cell with Hebbian synapses counting pre- and post-synaptic coincidences, including autapses that effectively count individual cell usage. Analysis follows the same lines as for the projection model (section 3.1) and only the principal results are stated here, with some steps omitted.
When Ec is presented as a test stimulus, the total excitation Qc from one cell to another within Ec is given, analogous to equation 3.2, by:
Qc = mc W² + Σevents j≠c (mj Ujc²) (3.11)
A corrected value Q'c is calculated to allow for average levels of interference:
Q'c = Qc − M a W (a W + 1 − 2a)(1 − 1/Z)⁻¹ (3.12)
This has expectation equal to mc W² (1 − a)(1 + a²/Z)(1 − 1/Z)⁻¹, so we can obtain an unbiased estimate of mc as follows:
m̂c = Q'c / ( W² (1 − a)(1 + a²/Z)(1 − 1/Z)⁻¹ ) (3.13)
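A minimal sketch of the support-model estimate (equations 3.11-3.13): a Z × Z Hebbian count matrix is accumulated over the epoch, and the internal excitation for the test pattern is then summed and corrected. The parameters are arbitrary choices of ours, and for brevity the per-event counts are drawn uniformly rather than from a Poisson distribution:

```python
import random

random.seed(2)
Z, a = 60, 0.1
W = int(a * Z)                                       # active cells per event

patterns = [set(random.sample(range(Z), W)) for _ in range(40)]
counts = [random.randint(0, 3) for _ in patterns]    # occurrences in the epoch

# Hebbian count matrix: pairings of pre- and post-synaptic activity
# (full Z x Z connectivity, autapses included, as in the text).
synapse = [[0.0] * Z for _ in range(Z)]
for pat, m in zip(patterns, counts):
    for i in pat:
        for j in pat:
            synapse[i][j] += m

c = 0
M = sum(counts)

# Total internal excitation when E_c is presented (equation 3.11).
Qc = sum(synapse[i][j] for i in patterns[c] for j in patterns[c])

# Correction for average interference (eq 3.12) and unbiased estimate (eq 3.13).
Qpc = Qc - M * a * W * (a * W + 1 - 2 * a) / (1 - 1 / Z)
est = Qpc / (W ** 2 * (1 - a) * (1 + a ** 2 / Z) / (1 - 1 / Z))

print(counts[c], round(est, 1))
```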
The full variance of Q'c is calculated as for S'c (equation 3.6), taking account of the independent Poisson distributions for the numbers of interfering events {mj} and hypergeometric distributions for the overlaps {Ujc}:
Var(Q'c) = Σj≠c ( ⟨mj⟩² Var(Ujc²) + Var(Ujc²) Var(mj) ) (3.14)
The algebraic expansion of Var(Ujc²) is complex, but is simplified with the terminology F(r) = (W!/(W − r)!)² (Z − r)!/Z!, the r-th factorial moment of the hypergeometric overlap Ujc:
Var(Ujc²) = F(4) + 6 F(3) + 7 F(2) + F(1) − (F(2) + F(1))² (3.15)
The relative variance for the frequency estimate m̂c (equation 3.13) can then be written as:
r = (x / Z²) (mI / mc) (1 [+ m̄]) (3.16)
where mI = Σj≠c mj is the expected total number of occurrences of interfering events, m̄ = Σj≠c mj² / mI is the average number of repeats of individual interfering events, weighted according to their frequencies, and x is given by:
x = Z² Var(Ujc²) / ( W² (1 − a)(1 + a²/Z)(1 − 1/Z)⁻¹ )² (3.17)
x depends on both W and Z, but for networks of different size (Z) it has a minimum value, at an optimal choice of W, that is only weakly dependent on Z: minimum x = 3.9 for Z = 10, 6.7 for Z = 100, 8.7 for Z = 1000, 9.9 for Z = 10⁵. The corresponding optimal activity ratio, to give minimum variance and maximum counting efficiency with this model, is therefore a ≈ 1/√(2Z).
4 Results for events represented with equal activity ratios
Analysis for the support model (3.2) was restricted, for simplicity, to cases where all activity ratios are equal. In this section we also apply this restriction to the projection model (3.1) to assist initial interpretation and comparisons. The relative variance for the projection model (equation 3.10) becomes:
r = Σj≠c ( mj [+ mj²] ) / ( (Z − 1) mc ) (4.1)
Both this and the corresponding equation 3.16 for the support model can be broken down as products of a representation-dependent term ( (Z − 1)⁻¹ or x Z⁻² ) and a term that depends only on the expected frequencies of the counted and interfering events. The latter term is the same for both models and we call it the interference ratio (Fc) for a particular event Ec:
Fc = Σj≠c ( mj (1 [+ mj]) ) / mc (4.2)
The interference ratio expresses the extent to which a counted event is likely to be swamped by interfering events during the counting epoch. The principal determinant is the ratio of occurrences of interfering and counted events: Σj≠c mj / mc. The term in square brackets expresses the added uncertainty that can be introduced when interfering events occur with multiple repetitions (mj >> 1), because overlaps that are above and below expectation do not then average out so effectively. As described after equation 3.7, stable conditions may allow the nervous system to adapt to compensate for a fixed set of overlaps, corresponding to omission of this term; it is in any case negligible if interfering events seldom repeat (mj << 1).
We can see how Fc governs the relative variance of the count by substituting in equations 4.1, 3.16 for the two models:
r = Fc / (Z − 1) (projection model) (4.3)
r = x Fc / Z² (support model) (4.4)
If the number of cells (or synapses, for the support model) is much less than the interference ratio (Z << Fc or Z² << x Fc), then r >> 1 and counting efficiency is necessarily very low. High efficiency (>50%) requires r < 1 and networks that are correspondingly much larger and very redundant from an information theoretic point of view (see Section 6.1). For a particular number of cells Z, 50% efficiency for an event Ec requires that Fc < Z or Z²/x on the two models. For the support model the highest levels of interference are tolerated with sparse representations giving minimum x, with a approximately 1/√(2Z) and Fc < 0.1-0.25 Z² (Section 3.2).
If we set a given criterion of efficiency, the maximum interference ratio Fc scales in proportion to Z for the projection model and approximately Z² for the support model. This is consistent with what one might expect given that the numbers of underlying variables used to accumulate counts are respectively Z and Z² in the two models, though the performance per cell in the projection model is up to 10 times greater than the performance per synapse in the support model.
Fig. 3 illustrates the dependence of efficiency on the number of cells Z, for an interference ratio Fc = 100 and a = 0.1. When Z exceeds 100 (= Fc), the efficiency exceeds about 50% on the projection model and 93% on the internal support model. The advantage of the second model (based on counts of paired cellular activity) results from the fact that the proportion of pairings shared with a typical interfering event (a²) is less than the proportion of shared active cells (a). Efficiency with the projection model is independent of a, while with the support model it is maximised by choosing a to minimise x in equation 4.4 (see above).
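The quoted figures follow directly from equations 4.3 and 4.4, taking x ≈ 6.7 for Z = 100 from section 3.2; a quick check:

```python
def eff_projection(Z, Fc):
    """Efficiency e = (1 + r)^-1 with r = Fc / (Z - 1)   (equation 4.3)."""
    return 1.0 / (1.0 + Fc / (Z - 1))

def eff_support(Z, Fc, x):
    """Efficiency with r = x * Fc / Z**2   (equation 4.4); x depends on a and Z."""
    return 1.0 / (1.0 + x * Fc / Z ** 2)

# Conditions of Fig. 3: Fc = 100, Z = 100, near-optimal sparseness (x ~ 6.7).
print(eff_projection(100, 100))     # ~0.50
print(eff_support(100, 100, 6.7))   # ~0.93
```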
Simulations were performed using LABVIEW (National Instruments) to confirm some of the results in the foregoing analysis. Fig. 4A,B shows simulation results for conditions for which Fig. 3 gives the theoretical expectations (N = 101 equiprobable events, m = 10, Fc = 100, a = 0.1, Z = 100). The observed efficiencies are in agreement (see legend to Fig. 4) and the graphs illustrate the extent of correspondence between estimated and true counts. The horizontal spread on these graphs shows the Poisson variation of the true counts about their mean (m = 10), while the vertical deviation from the (dashed) lines of equality shows the added variance due to the algorithms. The closeness of the regression lines and the lines of equality (ideal performance) shows that the algorithms are unbiased, while the efficiency follows from the mean squared error divided by m (equations 1.1, 1.2) and is almost equivalent to the squared correlation coefficient r². Performance is better for the support model (Fig. 4B) than the projection model (Fig. 4A) and is worse (Fig. 4C) if adaptation to fixed stochastic conditions and overlaps is not employed. The interference ratio (Fc: equation 4.2) rose for Fig. 4C to Fc = 1089 with the same number of events handled by the network. With just 10 events (substantially fewer than the number of cells), Fc was restored to 100 and the efficiency to 50% as predicted (Fig. 4D).
The theoretical dependence of efficiency on a, Z and Fc for representations having uniform a is shown in contour plots in Fig. 5A,B for the projection and support models respectively. Contours of equal efficiency (for Fc = 100) are plotted against a and log(Z/Fc). For the projection model (Fig. 5A) the efficiency is independent of a and depends only on the ratio Z/Fc (or, more strictly, (Z − 1)/Fc: equation 4.3). This graph would not be significantly different for any larger value of Fc. For the support model (Fig. 5B) the efficiency is higher than for the projection model (Fig. 5A) for all combinations of Z/Fc and a, especially if a is close to the optimum level of sparseness corresponding to the contour minima (a ≈ 1/√(2Z)). Higher interference ratios (Fc > 100) yield even greater efficiencies with a given value of Z/Fc, while the benefits of sparse coding with the support model become more pronounced.
Some of the implications of these plots are illustrated here with an example. Suppose initially that events are represented by 12 active cells on a population of 30 (a = 0.4, Z = 30). With an interference ratio Fc = 100, the counting efficiency would be 22% on the projection model and 45% on the support model (points marked a on Figs. 5A,B). These are modest efficiencies, but changing the representations with the same event statistics can improve the efficiency.
Representing events with sparser activity on the same number of cells, as might be achieved by simply raising activation thresholds, can improve counting efficiency with the support model, though not with the projection model. Representation with 4 instead of 12 active cells out of 30 (a = 0.13) gives efficiencies of 22% and 63% on the two models (points b). Though efficiency is improved with the support model, note that information may be lost by this encoding since there are only about 10⁴ distinct patterns with 4 active cells out of 30, compared with about 10⁸ having 12 active. A better strategy is to use more cells. For example, recoding to 5 active cells on 100 we get 50% and 93% efficiency on the two models (points c), and with 4 active on 300 we get 75% and 98% (points d). Each of these encodings retains about 10⁸ distinguishable patterns, so incurs no loss of information. The improvement in counting efficiency is achieved at the cost of extra redundancy, with in this example a tenfold increase of the number of cells.
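The projection-model efficiencies and the pattern counts quoted in this example can be checked directly from equation 4.3 and binomial coefficients:

```python
from math import comb

def eff_projection(Z, Fc=100):
    """Efficiency (1 + Fc / (Z - 1))**-1 from equation 4.3."""
    return 1.0 / (1.0 + Fc / (Z - 1))

# Points a-d of the example: (active cells W, total cells Z).
for W, Z in [(12, 30), (4, 30), (5, 100), (4, 300)]:
    print(W, Z, f"{eff_projection(Z):.0%}", f"{comb(Z, W):.1e} patterns")
```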
These levels of efficiency fall short of the 100% efficiency that can be achieved if cells are assigned to direct representations for each event of interest. 100 cells would be enough (without duplication or spare capacity) to provide direct representations for 100 events, which is the largest number that can simultaneously have interference ratios Fc = 100. If these 100 cells are used instead for a distributed representation for these 100 events, then there would be a moderate loss of efficiency to 50% or 93% on the two models. The merits of distributed and direct representation are considered further in the discussion, but these results suggest that if distributed representations are to be used for efficient counting then just as many cells may be required as for direct representations. However, their ability to represent rare and novel events unambiguously gives them greater flexibility even in relation to counting.
5 Events with different probabilities and different activity ratios
The interference ratio Fc is higher for rare events than for common ones, since the factor that varies most in proportionate terms in equation 4.2 is the denominator mc. If, as assumed in the last section, all event representations have the same activity ratio a, rare events (with probability less than about 1/Z on the projection model) are counted inefficiently, because the overlap errors act like a source of intrinsic noise. Though only relatively few events (at most of order Z or Z^2/10) can simultaneously be counted with ≥50% efficiency, many of the huge number with distinct representations on a network may occur, but too rarely to be countable. Since the relative frequencies of different events are not stable features of an animal's environment, however, an event that is infrequent in one epoch may be counted efficiently in other epochs, when it occurs more frequently (see Discussion, section 6.4).
The poor counting efficiency for rare events may also be remedied if they are represented with higher activity ratios than other events. This requires that particular events be identified as worthy of such amplification, through being recognised as important, or simply novel. Conversely, lowering a for common or unimportant events is beneficial. The mechanisms that may vary activity ratios are discussed later (Section 6.3). The results are illustrated in Fig. 6 for a simple example using the projection model, based on equation 3.10. Calculations are for 7 events with different frequencies, first with equal activity ratios a=0.02 (squares) and second with activity ratios inversely proportional to frequency (crosses), giving more uniform efficiency. The way in which rare events benefit from higher a is that the high probability that individual cells will be active for other, interfering events is compensated by the pooling of information from more active cells. Experimentation with different power-law relationships a = k·m^(-n) required n in the range 1.0 to 1.3 to give approximately uniform efficiency over a range of conditions, with n ~ 1.0 when all values of a are <<1. Though these results are presented only for the projection model, it is clear that qualitatively similar conclusions must apply for the support model.
6 Discussion
Counting events, or determining their frequencies, is necessary for learning and for appropriate control of behaviour. This paper analyses the uncertainty in estimating frequencies that arises from sharing active elements between the distributed representations of different events.
To analyse this problem we make substantial simplifying assumptions. We treat neurons as binary, with different events represented by randomly related selections of active cells. We treat the frequencies of interfering events as independent Poisson variables. And lastly, we employ just two simple models as counting algorithms, with numbers of physiological variables equal to the numbers of cells in one case and synapses in the other.
Sensory information contains a great deal of associative statistical structure that is absent from the representations we model. Our results would require modification for structured data; but it has long been argued, with considerable supporting evidence (Attneave 1954; Barlow 1959, 1989; Watanabe 1960; Atick 1992), that a prime function of sensory and perceptual processing is to remove much of the associative statistical structure of the raw sensory input by using knowledge of what is predictable. The resulting less structured input would be closer to what we have assumed for our analysis. Notice, however, that inference-like processes are involved in unsupervised learning and perceptual coding, and that these will be influenced by errors of counting in ways similar to those analysed here.
Our aim in the discussion is to identify simple insights and principles that are likely to apply to all processes of adaptation and learning that depend on counting. There are five sections: the counting problem in general; five principles for reducing the interference problem; possible applications of these principles in the brain; relative merits of direct and distributed representations; and the problem of handling the huge numbers of representable events on a distributed network.
6.1 The counting problem in distributed representations. Counting with distributed representations nearly always leads to errors, but the loss of efficiency need not be great if the representations follow principles outlined in the next section. We shall see that the capacity of a network to count different events (and therefore to learn about them) is generally enormously less than its capacity to represent different events.
The interference ratio Fc, defined in equation 4.2, gives direct insight into what limits counting efficiency. At its simplest it is the ratio of the total number of interfering events (of any type) to the number of events being counted over a defined counting epoch. In our first (projection) model, the number of cells employed in the representations (Z) must exceed Fc if the efficiency is to remain above 50%; in the second (internal support) model fewer cells may suffice for equivalent performance, provided the number of synapses (Z^2) is at least 4-10 times Fc (equation 4.4). Increasing the number of cells increases efficiency: with either model, 3 times as many cells or synapses are required to achieve 75% efficiency, and 9 times as many for 90% efficiency. Multiple repetitions of individual interfering events tend to amplify the errors due to overlap and raise these requirements further (equation 4.2).
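The thresholds quoted here (50% at Z = Fc, 75% at Z = 3Fc, 90% at Z = 9Fc) are consistent with a projection-model efficiency of the form Z/(Z+Fc). This closed form is our reading of the quoted figures, not a formula reproduced from equation 4.3:

```python
# Efficiency consistent with the thresholds quoted in the text for the
# projection model (an inferred form, not equation 4.3 itself).
def efficiency(Z, Fc):
    return Z / (Z + Fc)

for mult in (1, 3, 9):
    print(mult, efficiency(mult * 100, 100))   # 0.5, 0.75, 0.9
```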
In a situation where all events are equally frequent, the interference ratio is just the number of different events, and this number is limited, as above, to roughly Z or Z^2/10 for 50% efficiency. In the more general case where events have different probabilities or activate different numbers of cells, their counting efficiencies will vary; roughly speaking, it is those with the largest products of activity ratio and expected frequency (a_i m_i) that have the greatest counting efficiency (section 5), and the number that are simultaneously countable is again limited to roughly Z or Z^2/10. Events that are rare and sparsely represented cannot be efficiently counted, though if their probability rises they may become countable in a different epoch (Section 6.4).
The number of simultaneously countable events is dramatically smaller than the 2^Z distinct events that can be unambiguously represented on Z cells, and it scales with only the first or second power of Z, not exponentially. This parallels long-established results on the storage and recall of patterns using synaptic weight changes, where the number of retrievable patterns scales between the first and second power of the number of cells, with the retrievable information (in bits) ranging from 8% to 69% of the number of synapses involved (with extreme assumptions) for different models (e.g. Willshaw et al., 1969; Gardner-Medwin, 1976; Hopfield, 1982). Huge levels of redundancy are required in order simultaneously to count or learn about a large number (n) of events, compared with the minimum number of cells (log2(n)) on which these events could be distinctly represented. For Z=n, as required either for direct representations or for distributed representations with 50% counting efficiency on our projection model, the redundancy defined in this way is 93% for n=100, rising to 99.9% for n=10^4. This problem will re-emerge in various guises in the following sections.
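The redundancy figures can be checked directly: with Z = n cells where log2(n) cells would suffice to distinguish n events, the redundancy as defined here is 1 - log2(n)/n.

```python
from math import log2

# Redundancy of using Z = n cells where log2(n) cells could in
# principle distinguish n events.
def redundancy(n):
    return 1 - log2(n) / n

print(f"{redundancy(100):.1%}")    # 93.4%
print(f"{redundancy(10**4):.1%}")  # 99.9%
```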
6.2 Principles for minimizing counting errors. Our results lead to the following principles for achieving effective counting in distributed representations.
1. Where many events are to be counted on a net, use highly redundant codes with many times the number of cells needed simply to distinguish these events.
2. Use sparse coding for common and unimportant events, but raise the activity ratio for events that can be identified as likely to have useful associations, especially if the events are rare.
3. Minimise overlap between representations, aiming for overlaps less than those of the random selections of active cells that we have assumed for analysis.
These principles arise directly from the analysis of the problem as we have defined it, namely to count particular reproducible events on a network in the context of interfering events represented on the same network. Two more principles arise from considering how representations can economically permit counting that is effective for learning.
4. Representations should be organised so that counting can occur in small modules, each being a focus of information likely to be relevant to a particular type of association. Large numbers of events would then be grouped for counting into subsets (those that have the same representation in a module).
5. The special subsets that activate individual cells should, where possible, be ones that resemble each other in their associations. Overlap between representations should ideally mirror similarities in the implications of events in the external world.
These extra principles can help to avoid the necessity of counting large numbers of individual events, making learning depend on the counting of fewer categorisations of the environment, and therefore manageable on fewer cells. They can lead to appropriate generalisation of learning, and can make learning faster since events within a subset obviously occur more often than the individual events.
6.3 Does the brain follow these principles? One of the puzzles about the cortical representation of sensory information is the enormous number of cells that appear to be devoted to it. A region of 1 deg^2 at the human fovea contains about 10,000 sample points at the Nyquist interval (taking the spatial frequency limit as 50 cycles/deg), and the number of retinal cones sampling the image is quite close to this figure. But the number of cells in the primary visual cortex devoted to that region is of order 10^8. Some of the 10^4-fold increase may be explained by the role of the primary visual cortex in distributing information to many destinations, but this cannot account for all of it, and one must conclude that this cortical representation is grossly redundant. The selective advantage that has driven the evolution of cortical redundancy may be the necessity for efficient counting and learning, as encapsulated in our first principle.
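The arithmetic behind the 10^4-fold expansion is straightforward; a sketch, taking the order-of-magnitude cortical-cell count of 10^8 from the text as given:

```python
# Nyquist sampling of 1 deg^2 of foveal image at a spatial-frequency
# limit of 50 cycles/deg: sample points must lie 1/(2*50) deg apart.
cutoff = 50                    # cycles/deg
samples = (2 * cutoff) ** 2    # sample points per deg^2
cortical_cells = 10 ** 8       # order-of-magnitude figure from the text
print(samples)                       # 10000 sample points
print(cortical_cells // samples)     # 10000 cortical cells per sample point
```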
In relation to the second principle, evidence for sparse coding was put forward by Legéndy & Salcman (1985), and it was clear from the earliest recordings from single neurons in the cerebral cortex that they are hard to excite vigorously and must spend most of their time firing at very low rates, with only brief periods of strong activity when they happen to be excited by their appropriate trigger feature. Field (1994), Baddeley (1996), Baddeley et al. (1997), Olshausen & Field (1997), van Hateren & van der Schaaf (1998), Smyth et al. (1998), Tolhurst et al. (1999), Tadmor & Tolhurst (1999) and others have now quantitatively substantiated this impression.
Counting accuracy for a particular event benefits from the sparse coding of other events (reducing their overlap and interference), not from its own sparse coding. Sparsity is especially important for common events, because of the extent of interference they can cause, and because it can be afforded with little impact on their own counting, which is already quick and efficient. The benefits of sparsity are greatest for counting based on Hebbian synaptic modification (as in our support model and a related model for direct detection of associations: Gardner-Medwin & Barlow, 1992). Significant events (particularly rare ones) may need higher activity ratios to boost their counting efficiency at the expense of others. We envisage that this might be achieved if selective attention to important and novel events favours the representation of more features of the environment, through lowering of cell thresholds and the acquisition of more detailed sensory information. Adaptation and habituation, on the other hand, can raise thresholds, favouring the required sparse representation of common events. This flexibility of distributed representations is attractive, for in biological circumstances the probability and significance of individual events are highly context-dependent.
Overlap reduction, as advocated in our third principle, follows to some extent from any mechanism achieving sparse coding. But there are also indications that well-known phenomena like the waterfall illusion and other after-effects of pattern adaptation involve a "repulsion" mechanism (Barlow, 1990) that would directly reduce overlap between representations following persistent co-activation of their elements (Barlow & Földiák, 1989; Földiák, 1990; Carandini et al., 1997). Similar after-effects offer possible mechanisms for improving the long-term recall of overlapping representations of events, through processing during sleep (Gardner-Medwin, 1989).
The fourth principle fits rather well with the organisation of the cortex in modules at several scales. The great majority of interactions between cortical cells are with other cells that are close by (Braitenberg and Schüz, 1991), so the smallest module might be a minicolumn or column, such as are found in sensory areas (Edelman & Mountcastle, 1978). These would be the focus of the statistical structure comprising local correlations. Each pyramidal cell receives many (predominantly local) inputs that effectively define its module, as the connections to the accumulator cell define the network in our projection model (Fig. 1). The outputs then go to other areas of specialisation, for example area MT as a focus for the spatiotemporal correlations of motion information. Optimal organisation of a large system must involve mixing diverse forms of information to find new types of association (Barlow, 1981), as well as concentrating information that has revealed associations in the past, and this seems broadly consistent with cortical structure.
The trigger features of cortical neurons often make sense in terms of behaviourally important objects in the world surrounding an animal, suggesting that the brain exploits the advantages expressed in the fifth principle. For instance, complex cells in V1 respond to the same spatiotemporal pattern at several places in the receptive field, effectively generalising for position. The same is true for motion selectivity in MT, and perhaps for cells that respond to faces or hands in inferotemporal cortex (Gross et al., 1985). Such direct representation of significant features (Barlow, 1972, 1995) assists in making distributed representations workable.
6.4 The relative merits of counting with direct and distributed representations. Maximum counting efficiencies for events represented within a network or counting module would be achieved with direct representations, but this requires prespecification of the patterns to be detected and counted. In contrast, a distributed network has the flexibility to represent and count, without modification, events that have not been foreseen. Our results show that provided these unforeseen representations have no more than chance levels of overlap with those of other events (as in our models), good performance can be achieved on a network of Z cells with almost any limited selection (numbering of order Z or Z^2/10) of frequent events out of the very large number (of order 2^Z) that can be represented. Infrequent but important events can be included within this selection by raising their activity ratios.
The potential for unambiguous representation of a huge number of rare and unforeseen events means that distributed networks can be very versatile for counting purposes in a non-stationary environment. Learning often takes place in short epochs during which some type of experience is frequent: for example, learning on a summer walk that nettles sting, or enlarging one's vocabulary in a French lesson. A system in which there is gradual decay of interference from previous epochs, reasonably matched in its time-course to the duration of a significant counting epoch, can in principle allow efficient counting of any transiently frequent event that is represented by any one of the patterns that the network can generate. This can assist in learning associations from short periods of intense and novel experience, though of course there may be hazards in generalising such associations to other periods.
Once direct representations are set up for significant events, many such events might in principle be simultaneously detected and counted, through parallel processing. This cannot occur where events have overlapping distributed representations: one cannot have two different patterns simultaneously on the same set of cells. Thus there is a trade-off between the flexibility of distributed representations in handling unforeseen events and the constraint that they can only handle them one at a time. Since events of importance at the sensory periphery are not generally mutually exclusive, it seems necessary to use distributed representations in a serial rather than a parallel fashion, by attending to one aspect of the environment at a time. This is a significant cost to be paid for the versatility of distributed representations, but it resembles in some degree the way we handle novel and complex experience.
6.5 Managing the combinatorial problem. Huge numbers of high-level feature detectors are required by some models of sensory coding based on direct representations: the so-called grandmother-cell or yellow-Volkswagen problem (Harris, 1980; Lettvin, 1995). The ability of distributed representations to represent a vastly greater number of events than is possible with direct representation is often thought to offer a way round this problem, but our results show that economy is not so simply achieved. For counting and learning, distributed representations have no advantage (or, with our support model, rather limited advantage) over direct representations, because the maximum number that can be efficiently counted scales only with Z or Z^2 rather than 2^Z. Where distributed networks are used for representing larger numbers of significant events, counting on subsets of events (our fourth principle) may permit an economy of cells. This economy relies, however, on it being possible to separate off into a relatively small module the information that is needed to establish one type of association about an event, while other modules receive information relevant to other associations. A combination of economical representation of events on distributed networks with the more extravagant use of cells required for counting may depend, for success, on a property of the represented events: that their associations can be learned and generalised from a separable fraction of the information they convey.
The combinatorial problem arises in an acute form at a sensory surface such as the retina, since impossibly large numbers of patterns can (and do) occur on small numbers of cells. Receptors that are not grouped close together experience states that are largely independent in a detailed scene, and for just 50 receptors many of the 10^15 distinct patterns may be almost equally likely (even considering just 2 states per cell). This means it is out of the question to count peripheral sensory events separately, and it would probably be impossible to identify even the correlations due to image structure without the topographic organisation that allows this to be done initially on small sets of cells. The considerable (10,000-fold) expansion in cell numbers from retina to cortex (Section 6.3) allows for the counting of many subsets of events, but it would not go far towards efficient counting of useful events without using a hierarchy of small modules, each analysing a different form of structure to be found in the image.
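The 10^15 figure is simply the number of joint binary states of 50 independent receptors:

```python
# Distinct joint states of 50 binary receptors, each with 2 states.
states = 2 ** 50
print(states)   # 1125899906842624, i.e. about 1.1e15
```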
6.6 Conclusion. Our analysis suggests ways in which distributed and direct representations should be related, and it has implications for understanding many features of the cortex. These include the expansion of cell numbers involved in cortical sensory representations; the extensive intracortical connectivity and its modular organisation; the tendency for trigger features to correspond to behaviourally meaningful subsets of events; and the phenomena of habituation to common events, alerting to rare events, and attention to one thing at a time. Distributed representation is unavoidable in the brain, but may cause serious errors in counting and inefficiency in learning unless guided by the principles that we have identified.
References
Atick, J.J. (1992) Could information theory provide an ecological theory of sensory processing? Network, 3, 213-251.
Attneave, F. (1954) Informational aspects of visual perception. Psychological Review, 61, 183-193.
Baddeley, R.J. (1996) An efficient code in V1? Nature (London), 381, 560-561.
Baddeley, R., Abbott, L.F., Booth, M.C.A., Sengpiel, F., Freeman, T., Wakeman, E.A. & Rolls, E.T. (1997) Responses of neurons in primary and inferior temporal visual cortices to natural scenes. Proceedings of the Royal Society, Series B, 264, 1775-1783.
Barlow, H.B. (1959). Sensory mechanisms, the reduction of redundancy, and intelligence. In The Mechanisation of Thought Processes (pp. 535-539). London: Her Majesty's Stationery Office.
Barlow, H.B. (1962) Measurements of the quantum efficiency of discrimination in human scotopic vision. Journal of Physiology (London), 160, 169-188.
Barlow, H.B. (1972) Single units and sensation: a neuron doctrine for perceptual psychology? Perception, 1, 371-394.
Barlow, H.B. & Reeves, B.C. (1979) The versatility and absolute efficiency of detecting mirror symmetry in random dot displays. Vision Research, 19, 783-793.
Barlow, H.B. (1981) Critical limiting factors in the design of the eye and visual cortex. The Ferrier Lecture, 1980. Proceedings of the Royal Society, London, B, 212, 1-34.
Barlow, H.B. (1989). Unsupervised learning. Neural Computation, 1, 295-311.
Barlow, H.B. (1990). A theory about the functional role and synaptic mechanism of visual after-effects. In C. Blakemore (Ed.), Vision: Coding and Efficiency. Cambridge, UK: Cambridge University Press.
Barlow, H.B. (1995). The neuron doctrine in perception. In M. Gazzaniga (Ed.), The Cognitive Neurosciences (Chapter 26, pp. 415-435). Cambridge, Mass.: MIT Press.
Barlow, H.B. & Tripathy, S.P. (1997) Correspondence noise and signal pooling in the detection of coherent visual motion. Journal of Neuroscience, 17, 7954-7966.
Barlow, H.B. & Földiák, P. (1989). Adaptation and decorrelation in the cortex. In R. Durbin, C. Miall & G. Mitchison (Eds.), The Computing Neuron (pp. 54-72). Wokingham, England: Addison-Wesley.
Braitenberg, V. & Schüz, A. (1991). Anatomy of the Cortex: Statistics and Geometry. Berlin: Springer-Verlag.
Carandini, M., Barlow, H.B., O'Keefe, L.P., Poirson, A.B. & Movshon, J.A. (1997) Adaptation to contingencies in macaque primary visual cortex. Philosophical Transactions of the Royal Society, Series B, 352, 1149-1154.
Churchland, P.S. (1986). Neurophilosophy: Towards a Unified Science of the Mind-Brain. Cambridge, Mass.: MIT Press.
Edelman, G.M. & Mountcastle, V.B. (1978). The Mindful Brain. Cambridge, Mass.: MIT Press.
Farah, M.J. (1994) Neuropsychological inference with an interactive brain: a critique of the "locality" assumption. Behavioural and Brain Sciences, 17, 43-104.
Field, D.J. (1994) What is the goal of sensory coding? Neural Computation, 6, 559-601.
Fisher, R.A. (1925). Statistical Methods for Research Workers. Edinburgh: Oliver and Boyd.
Földiák, P. (1990) Forming sparse representations by local anti-Hebbian learning. Biological Cybernetics, 64, 165-170.
Gardner-Medwin, A.R. (1969) Modifiable synapses necessary for learning. Nature, 223, 916-919.
Gardner-Medwin, A.R. (1976) The recall of events through the learning of associations between their parts. Proceedings of the Royal Society B, 194, 375-402.
Gardner-Medwin, A.R. (1989) Doubly modifiable synapses: a model of short- and long-term auto-associative memory. Proceedings of the Royal Society B, 238, 137-154.
Gardner-Medwin, A.R. & Barlow, H.B. (1992) The effect of sparseness in distributed representations on the detectability of associations between sensory events. Journal of Physiology, 452, 282P.
Gross, C.G., Desimone, R., Albright, T.D. & Schwartz, E.L. (1985). Inferior temporal cortex and pattern recognition. In C. Chagas, R. Gattass & C. Gross (Eds.), Pattern Recognition Mechanisms (pp. 165-178). Vatican City: Pontificia Academia Scientiarum.
Harris, C.S. (1980). Insight or out of sight? Two examples of perceptual plasticity in the human adult. In C.S. Harris (Ed.), Visual Coding and Adaptability (pp. 95-149). New Jersey: Lawrence Erlbaum Associates.
Hinton, G.E. & Anderson, J.A. (1981, updated 1989). Parallel Models of Associative Memory. Hillsdale, NJ: Lawrence Erlbaum.
Hinton, G.E., McClelland, J.L. & Rumelhart, D.E. (1986). Distributed representations. In D.E. Rumelhart, J.L. McClelland & the PDP Research Group (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition (Ch. 3, pp. 77-109). Cambridge, Mass.: MIT Press.
Hopfield, J.J. (1982) Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, USA, 79, 2554-2558.
Jones, R.C. (1959) Quantum efficiency of human vision. Journal of the Optical Society of America, 49, 645-653.
Legéndy, C.R. & Salcman, M. (1985) Bursts and recurrences of bursts in the spike trains of spontaneously active striate cortex neurons. Journal of Neurophysiology, 53, 926-939.
Lettvin, J.Y. (1995). J.Y. Lettvin on grandmother cells. In M.S. Gazzaniga (Ed.), The Cognitive Neurosciences (pp. 434-435). Cambridge, Mass.: MIT Press.
Olshausen, B.A. & Field, D.J. (1997) Sparse coding with an overcomplete basis set: a strategy employed by V1? Vision Research, 37, 3311-3325.
Page, M. (2000) Connectionist models in psychology: a localist manifesto. Behavioural and Brain Sciences, in press.
Rose, A. (1942) The relative sensitivities of television pickup tubes, photographic film, and the human eye. Proceedings of the Institute of Radio Engineers, 30, 293-300.
Rumelhart, D.E. & McClelland, J.L. (1986). Parallel Distributed Processing (three volumes). Cambridge, Mass.: MIT Press.
Smyth, D., Tolhurst, D.J., Baker, G.E. & Thompson, I.D. (1998) Responses of neurons in the visual cortex of anaesthetized ferrets to natural visual scenes. Journal of Physiology, London, 509P, 50-51P.
Tadmor, Y. & Tolhurst, D.J. (1999?) Calculating the contrasts that mammalian retinal ganglion cells and LGN neurones see in natural scenes. Vision Research, under revision.
Tanner, W.P. & Birdsall, T.G. (1958) Definitions of d′ and η as psychophysical measures. Journal of the Acoustical Society of America, 30, 922-928.
Tolhurst, D.J., Smyth, D., Baker, G.E. & Thompson, I.D. (1999) Variations in the sparseness of neuronal responses to natural scenes as recorded in striate cortex of anaesthetized ferrets. Journal of Physiology, London, 515P, 103P.
van Hateren, J.H. & van der Schaaf, A. (1998) Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings of the Royal Society, Series B, 265, 359-366.
Watanabe, S. (1960) Information-theoretical aspects of inductive and deductive inference. IBM Journal of Research and Development, 4, 208-231.
Willshaw, D.J., Buneman, O.P. & Longuet-Higgins, H.C. (1969) Non-holographic associative memory. Nature, 222, 960-962.
Figures
Fig. 1. Outline of the projection model. Sets of binary cells (for example, those marked by ellipses) are activated when there are repeatable sensory or neural events. The frequency of occurrence of a particular event Ec (with its active cells black) is estimated from activation of an accumulator cell X when Ec is represented. Synaptic weights onto X are proportional to the usage of individual cells. Additional circuitry (not shown) counts the number of active cells Wc and estimates both the average number of active cells and the total number of events (M) during the counting period.
Fig. 2. Outline of the internal support model. Internal excitatory synapses within the network measure the frequency of cooccurrences of activity in pairs of cells by a Hebbian mechanism. On representation of the event Ec to be counted (indicated by the hatched active cells), the total of the internal activation stabilising the active pattern is estimated by testing the effect of a diffuse inhibitory influence on the number of active cells.
Fig. 3. Counting efficiency as a function of the number of cells (Z) used for distributed representations assumed to have the same activity ratio (a) for all events. Calculations are for Fc=100, i.e. 100 times more total occurrences of interfering events than of the counted event. The full lines are for a=0.1 with the projection model (heavy line) and support model (thin line) and for a=1/(2Z) for maximum efficiency using the support model (dotted line). Note that if all events are equiprobable, implying that there is a total of just 101 different event types occurring, then 7 cells would suffice to represent them distinctly if there were no need for counting. High counting efficiency requires ten times this number, and the corresponding ratio would be higher for larger numbers of events.
Fig. 4. Estimated counts plotted against actual counts, obtained by simulation with the projection model (A,C,D) and the support model (B). The network had 100 cells (Z=100) with 10 active during events (W=10, a=0.1), selected at random for each of 101 different events for A,B,C and 10 for D. The numbers of occurrences of each event were independent Poisson variables with mean m=10. New sets of representations and counts were generated to calculate each point. Estimated counts used the algorithms either with (A,B) or without (C,D) adaptation to the actual overlapping of representations of interfering events with the counted event. Interference ratios (Fc: equation 4.2) were therefore 100 for A,B,D and 1089 for C. Perfect estimates would lie on the line of equality (dashed). The full line shows the linear regression of estimates on actual counts. Efficiency e is shown, calculated as m divided by the mean squared error for the 100 points; this was not significantly different (in 5 repeats of such simulations) from the efficiencies expected from the theoretical analysis (equations 4.3, 4.4, 1.2), which were A: 50%, B: 93.3%, C: 8.3%, D: 50%.
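A simulation of this kind can be sketched as follows. This is a minimal reconstruction, not the authors' code: representations are random W-subsets of Z cells, counts are Poisson, and each estimate subtracts the expected (rather than the actual) overlap contribution W·a·M, corresponding to the no-adaptation case of panel C, where efficiency is low (~8%, i.e. mean squared error roughly twelve times the Poisson floor m).

```python
import random

def simulate(Z=100, W=10, n_events=101, m=10, rng=None):
    """One draw of a projection-model counting simulation: random
    representations, Poisson counts, and count estimates that subtract
    the *expected* (not actual) overlap contribution."""
    rng = rng or random.Random(0)
    reps = [frozenset(rng.sample(range(Z), W)) for _ in range(n_events)]

    def poisson(lam):                          # Knuth's algorithm
        L, k, p = 2.718281828459045 ** -lam, 0, 1.0
        while p > L:
            k += 1
            p *= rng.random()
        return k - 1

    counts = [poisson(m) for _ in range(n_events)]
    usage = [0] * Z                            # per-cell usage counters
    for rep, n in zip(reps, counts):
        for j in rep:
            usage[j] += n
    M, a = sum(counts), W / Z
    errors = []
    for rep, n in zip(reps, counts):
        s = sum(usage[j] for j in rep)         # accumulator input for event
        est = (s - W * a * M) / (W * (1 - a))  # unbiased overlap correction
        errors.append(est - n)
    return errors

errs = [e for seed in range(50) for e in simulate(rng=random.Random(seed))]
bias = sum(errs) / len(errs)
mse = sum(e * e for e in errs) / len(errs)
print(bias, mse / 10)   # bias near 0; MSE many times the Poisson floor m=10
```

Replacing the expected overlaps with the measured overlap of each interfering representation would correspond to the adapted cases (A,B), with correspondingly lower error variance.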
Fig. 5. Plots showing counting efficiency (indicated on contours) as a function of activity ratio a on the horizontal axis (assumed equal for all events), and log10 of the factor by which the number of cells Z exceeds the interference ratio Fc (vertical axis). Plots are for the projection model (A) and the support model (B), calculated for Fc=100, though plot (A) is essentially identical for all larger values of Fc, apart from the cut-off at low a corresponding to the requirement W ≥ 1. Points a-d are referred to in the text.
Fig. 6. The effect of non-uniform event frequencies on counting efficiency. The number of active cells (A) and the counting efficiency (B) are shown for 7 events with widely differing average counts during a counting epoch (horizontal axes). Two different assumptions are made: events are represented each with the same number of active cells (squares), or with a number inversely proportional to the event probability (crosses). In each case the mean value of a (weighted with probabilities of occurrence) is 0.02. Calculations are for the projection model with Z=400 cells and full variances (equation 3.10). Rare events have lower counting efficiency when all representations have equal activity ratios, but approximately uniform efficiency is achieved (crosses) when more active cells are used to represent rare events.
Definitions
The network is a set of Z binary cells. An event, relative to the network, is any stimulation that causes a specific activity vector (a pattern of W active and Z−W inactive cells). This vector is the representation of the event. Repeated occurrence of a representation implies repetition of the same event, even if the external stimulus is different.
A direct representation of an event contains at least one active cell that is active in no other event. The cell or cells that directly represent an event in this way have a 1:1 relation between their activity and occurrences of the event.
All other representations are distributed representations: each active cell is active also in other events, and identification of an event requires interaction with more than one cell. Compact distributed representations employ relatively few cells to distinguish a given number (N) of events, close to the minimum Z = log2(N).
The activity ratio of a representation is the fraction of cells (a = W/Z) that are active in it. A sparse representation has a low activity ratio. Direct representations are not necessarily sparse, though they must have extreme sparseness (W = 1) to represent the greatest possible number of distinct events (Z) on a network.
Overlap between 2 representations is the number of shared active cells (U).
Counting of an event means estimating how many times it has occurred during a counting epoch. Counting accuracy is limited by overlap and by the interference ratio (equation 4.2), which in simple cases is the total number of occurrences of interfering events (i.e. different from the counted event) divided by occurrences of the counted event.
Box 1. Definitions.
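The quantities in Box 1 can be illustrated with a minimal sketch, assuming (as in the simulations of Fig. 4) representations drawn uniformly at random and equal expected counts for all events; the variable names here are illustrative, not from the original models:

```python
import random

Z, W = 100, 10  # network size and active cells per representation (a = W/Z = 0.1)

def representation(rng):
    """A random sparse binary pattern: the set of W active cells out of Z."""
    return set(rng.sample(range(Z), W))

rng = random.Random(0)
Pc = representation(rng)                         # representation of the counted event
others = [representation(rng) for _ in range(100)]  # interfering events

# Overlap U between two representations is the number of shared active cells.
overlaps = [len(Pc & Pj) for Pj in others]

# Interference ratio (simple case): total occurrences of interfering events
# divided by occurrences of the counted event; here every event occurs m times.
m = 10
Fc = (len(others) * m) / m
print(Fc)                              # 100.0, as in Fig. 4 panels A, B, D
print(sum(overlaps) / len(overlaps))   # mean overlap; expectation is W*a = 1
```

With random representations the expected overlap between two patterns is W²/Z, which is why interference grows as representations become less sparse.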
Principal symbols employed
Z number of binary cells in the network
Pi A binary pattern (or vector) of activity on the network
Ei An event causing the pattern Pi (its representation)
N the number of such distinct events that may occur with finite probability during a counting epoch
Wi number of active cells in the representation of Ei
ai =Wi /Z activity ratio for the representation of Ei
Uij overlap (i.e. number of shared active cells) between Pi, Pj
&lt;mi&gt; expectation of the number of occurrences of Ei in a counting epoch
mi actual number of occurrences of Ei
M = Σi mi, the total number of event occurrences within the counting epoch
V variance introduced in estimating a count m
r = V/m relative variance, i.e. the variance of the estimate of m relative to the intrinsic Poisson variance of m (= m)
e = m/(m+V) efficiency in estimating m, given the variance V in counting a sample
Fc Interference ratio while counting Ec (equation 4.2)
<y> The statistical expectation of any variable y
{yi} The set of yi for all possible values of i
ŷ An estimate of y
Box 2. Principal symbols employed.
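The relations among V, r, and e in Box 2 can be checked numerically. This is a sketch using only the definitions above; note that e = 1/(1 + r), so the paper's 50%-efficiency criterion corresponds to an added variance equal to the Poisson variance:

```python
def relative_variance(V, m):
    """r = V/m: added variance relative to the intrinsic Poisson variance (= m)."""
    return V / m

def efficiency(V, m):
    """e = m/(m+V): statistical efficiency of a count with added variance V."""
    return m / (m + V)

m = 10.0
for V in (0.0, 10.0, 30.0):
    r, e = relative_variance(V, m), efficiency(V, m)
    # V = 0 gives e = 1 (100% efficiency); V = m gives e = 0.5 (50% efficiency)
    print(V, r, e)
```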
[Figure panels for Figs. 5 and 6; image content not recoverable. Fig. 5 axes: activity ratio a (horizontal) vs. log(Z/Fc) (vertical), with efficiency contours 0.1–0.9 and points a–d. Fig. 6 axes: expected no. of occurrences of individual events (horizontal) vs. no. of active cells (A) and efficiency (B).]
r
#1%N
s
31kZHB
t
#1tHB
u
#18HB
vB
#1t8HB
w
#1N
x
31cBHB
y
#1lHB
z
#1%HB
{B
#1l%HB

#1N
}
31VHB
~
#1[HB
#1HB
B
#1[HB
#1N
31k HB
#1t HB
#1 HB
B
#1t HB
#1 N
31xzf
BZ1xS
ZN
31xXf
BY1x]S
YN
31x7f
BX1x;S
XN
31
BW1
%
WN
31
BV1%
VN
31 I
BU1%6
UN
31! $
BT1!%w$
TN
31Q>$N
31
Q$N
31,"
BS1,#
SN
31
hSN
31,@
BR1,J
RN
31Q]>$cHB
#1H
HB
#1k
HB
#1kHB
#1kk l HB
#1kHMHB
#1
"
HB
#1
[
HB
#1[
HB
#1[
HB
#1"[
"
BCDEF$1 @
BCDEF$18a @J
N
31N
31N
31nN
31aN
31YN
31LN
31a N
31 HB
#1# J HB
#1J u HB
B
#1J u HB
#1# J N
31 0
HB
#1 HB
#1
HB
B
#1
HB
#1 N
31M
HB
#1W
HB
#1
HB
B
#1
HB
#1W
N
31k
ZHB
#1t
HB
#18HB
B
#1t8HB
#1
N
31cwHB
#1lHB
#1HB
B
#1lHB
#1N
31VHB
#1[;HB
#1;bHB
B
#1[;bHB
#1;N
31k [
HB
#1t
HB
#1
:
HB
B
#1t
:
HB
#1
N
31Y
f
BQ1Y
\v
QN
31
f
BP1
WS
PN
314fR
BO14W+
ON
31f/
BN1W
NN
31
C
BM1
MN
31C
BL1
LN
31IC
BK16
KN
31!$C
BJ1!w$
JN
31Q]>$cN
31
B N
31
U
BI1
Z
IN
31
BH1
HN
31
{
BG1
GN
31
[$cN
31#"P
BF1##Z
F
<E1?ln
E
<D1?`
DB
S ?e'5148}7B4:gx 4HtP
I4 _1021478483_1022436916_1022437050
_993922774_1021475438_1021475577_1021475603_1021557641_1021557846@@@@@@@@@5ee'(c
c
d
u
w
w
x
y
y
z
z
{
5ee'(c
d
u
w
w
x
y
y
z
z
{
argmD:\argm\mss\HBCOo.DOCargm&D:\TEMP\AutoRecovery save of HBCOo.asdargm&D:\TEMP\AutoRecovery save of HBCOo.asdargmD:\argm\mss\HBCOo.DOCtonyC:\ARGM\mss\hbb\HBCOp.DOCucgbargD:\argm\mss\hbb\NECO2077.docargmD:\argm\mss\hbb\NC2077.docargm'D:\TEMP\AutoRecovery save of NC2077.asdargmF:\argm\mss\hbb\NC2077a.docargmF:\argm\mss\hbb\NC2077a.doc4exA}V@~"k7?>87R6jl5T."4 `m5 ....OJQJo(OJQJo(OJQJo(OJQJo(hh.hhOJQJo(hho()hh)hh.@hh)`m5`m5 ~}( 3@hh)@45mm`&'()789:?@}B}C}D}E}F}H}I}J}K}L}M}W}XYZ[]^mbmcefghijk)l)m)nEoEpEqErEsEtEuEvEwExEzE{~222224444````uuu++BBB B
@*,.`@>@B@J@NPRT@XZ\^`@l@pr@v@@@$@L@T@`@l@t@@@@(T@24l@8:<@@B@FH@N@TVX@\@GTimes New Roman5Symbol3&ArialKM2Times New Roman3Times5&Tahoma?5 Courier New"1h+[&+[&+[&(!n)q0dtEThe limits of counting accuracy in distributed neural representationsNECO/GardnerMedwin/Ms.2077/argmargmCompObjj