Chapter Revision History Numenta

Any type of concept can be encoded in an SDR, including different types of sensory data, words, locations, and behaviors. This is why the neocortex is a universal learning machine. The individual regions of the neocortex operate on SDRs without "knowing" what the SDRs represent in the real world. HTM systems work the same way. As long as the inputs are in a proper SDR format, the HTM algorithms will work. In an HTM-based system, knowledge is inherent in the data, not in the algorithms.

To better understand the properties of SDRs, it can be helpful to think about how information is typically represented in computers and the relative merits of SDRs versus the representation scheme used by computers. In computers, we represent information with bytes and words. For example, to represent information about a medical patient, a computer program might use one byte to store the patient's age and another byte to store the patient's gender. Data structures such as lists and trees are used to organize pieces of information that are related. This type of representation works well when the information we need to represent is well defined and limited in extent. However, AI researchers discovered that to make a computer intelligent, it needs to know a huge amount of knowledge, the structure of which is not well defined.

For example, what if we want our intelligent computer to know about cars? Think of all the things you know about cars. You know what they do, how to drive them, how to get in and out of them, how to clean them, ways they can fail, what the different controls do, what is found under the hood, how to change tires, etc. We know the shapes of cars and the sounds they make. If you just think about tires, you might recall different types of tires, different brands of tires, how they wear unevenly, the best way to rotate them, etc. The list of all the things you know about cars goes on and on.

Each piece of knowledge leads to other pieces of knowledge in an ever-expanding web of associations. For example,
cars have doors, but other objects have doors too, such as houses, planes, mailboxes, and elevators. We intuitively know what is similar and different about all these doors, and we can make predictions about new types of doors we have never seen before, based on previous experience. We (our brains) find it easy to recall a vast number of facts and relationships via association. But when AI scientists try to encode this type of knowledge into a computer, they find it difficult.

In computers, information is represented using words of 8, 32, or 64 bits. Every combination of 1s and 0s is used, from all 1s to all 0s, which is sometimes called a "dense" representation. An example of a dense representation is the ASCII code for letters of the alphabet. In ASCII the letter "m" is represented by:

    01101101

Notice it is the combination of all eight bits that encodes "m"; the individual bits in this representation don't mean anything on their own. You can't say what the third bit means; the combination of all eight bits is required. Notice also that dense representations are brittle. For example, if you change just one bit in the ASCII code for "m" as follows:

    01100101

you get the representation for an entirely different letter, "e". One wrong bit and the meaning changes completely.

There is nothing about 01101101 that tells you what it represents or what attributes it has. It is a purely arbitrary and abstract representation. The meaning of a dense representation must be stored elsewhere.

In contrast, as mentioned earlier, SDRs have thousands of bits. At every point in time a small percentage of the bits are 1s and the rest are 0s. An SDR may have 2,000 bits with 2% (or 40) being 1 and the rest 0, as visualized in Figure 1.

Figure 1: An SDR of 2000 bits, where only a few are ON (ones).

In an SDR, each bit has meaning. For example, if we want to represent letters of the alphabet using SDRs, there may be bits representing whether the letter is a consonant or a vowel, bits representing how the letter sounds, bits representing
where in the alphabet the letter appears, bits representing how the letter is drawn (i.e., open or closed shape, ascenders, descenders, etc.). To represent a particular letter, we pick the 40 attributes that best describe that letter and make those bits 1. In practice, the meaning of each bit is learned rather than assigned; we are using letters and their attributes for illustration of the principles.

With SDRs, the meaning of the thing being represented is encoded in the set of active bits. Therefore, if two different representations have a 1 in the same location, we can be certain that the two representations share that attribute. Because the representations are sparse, two representations will not share an attribute by chance; a shared bit/attribute is always meaningful. As shown in Figure 2, simply comparing SDRs in this way tells us how any two objects are semantically similar and how they are different.

Figure 2: SDR A and SDR B have matching 1 bits and therefore have shared semantic meaning. SDR C has no matching bits or shared semantic meaning.

There are some surprising mathematical properties of SDRs that don't manifest in traditional computer data structures. For example, to store an SDR in memory, we don't have to store all the bits as you would with a dense representation. We only have to store the locations of the 1 bits, and surprisingly, we only have to store the locations of a small number of the 1 bits. Say we have an SDR with 10,000 bits, of which 2% (or 200) are 1s. To store this SDR, we remember the locations of the 200 1 bits. To compare a new SDR to the stored SDR, we look to see if there is a 1 bit in each of the 200 locations of the new SDR. If there is, then the new SDR matches the stored SDR. But now imagine we store the location of just 10 of the 1 bits, randomly chosen from the original 200. To see if a new SDR matches the stored SDR, we look for 1 bits in the 10 locations. You might be thinking, "But wait, there are many patterns that would match the
10 bits yet be different in the other bits. We could easily have a false positive match!" This is true. However, the chance that a randomly chosen SDR would share the same 10 bits is extremely low; it won't happen by chance, so storing ten bits is sufficient. However, if two SDRs did have ten 1 bits in the same location but differed in the other bits, then the two SDRs are semantically similar. Treating them as the same is a useful form of generalization. We discuss this interesting property, and derive the math behind it, later in this chapter.

Another surprising and useful property of SDRs is the union property, which is illustrated in Figure 3. We can take a set of SDRs and form a new SDR which is the union of the original set. To form a union, we simply OR the SDRs together. The resulting union has the same number of bits as each of the original SDRs, but is less sparse. Forming a union is a one-way operation, which means that given a union SDR, you can't say what SDRs were used to form the union. However, you can take a new SDR and, by comparing it to the union, determine if it is a member of the set of SDRs used to form the union. The chance of incorrectly determining membership in the union is very low due to the sparseness of the SDRs.

Figure 3: A union of 10 SDRs is formed by taking the mathematical OR of the bits. New SDR membership
is checked by confirming the 1 bits match. Note the union SDR is less sparse than the input SDRs.

These properties, and a few others, are incredibly useful in practice and get to the core of what makes brains different from computers. The following sections describe these properties and operations of SDRs in more detail. At the end of the chapter we discuss some of the ways SDRs are used in the brain and in HTMs.

Mathematical Properties of SDRs

In this section we discuss the following mathematical properties of SDRs, with a focus on deriving fundamental scaling laws and error bounds:

Capacity of SDRs and the probability of mismatches
Robustness of SDRs and the probability of error with noise
Reliable classification of a list of SDR vectors
Unions of SDRs
Robustness of unions in the presence of noise

These properties and their associated operations demonstrate the usefulness of SDRs as a memory space, which we illustrate in examples relevant to HTMs. In our analysis we lean on the intuitions provided by Kanerva (Kanerva, 1988 & 1997), as well as some of the techniques used for analyzing Bloom filters (Bloom, 1970). We start each property discussion with a summary description, and then go into the derivation of the mathematics behind the property. But first, here are some definitions of the terms and notations we use in the following discussion and throughout the text. A more comprehensive list of terms can be found in the Glossary at the end of this book.

Mathematical Definitions and Notation

Binary vectors: For the purposes of this discussion we consider SDRs as binary vectors, using the notation
x ∈ {0, 1}^n for an SDR x. The values of each element are "0" or "1", for OFF and ON, respectively.

Vector size: For an SDR x ∈ {0, 1}^n, n denotes the size of the vector. Equivalently, we say n represents the total number of positions in the vector, the dimensionality of the vector, or the total number of bits.

Sparsity: At any point in time, a fraction of the bits in vector x will be ON and the rest will be OFF. Let s denote the percent of ON bits. Generally in sparse representations, s will be substantially less than 50%.

Vector cardinality: Let w denote the vector cardinality, which we define as the total number of ON bits in the vector. If the percent of ON bits in vector x is s, then w = s × n.

Overlap: We determine the similarity between two SDRs using an overlap score. The overlap score is simply the number of ON bits in common, i.e. in the same locations, between the vectors. If x and y are two SDRs, then the overlap can be computed as the dot product:

    overlap(x, y) ≡ x · y

Notice we do not use a typical distance metric, such as Hamming or Euclidean, to quantify similarity. With overlap we can derive some useful properties discussed later, which would not hold with these distance metrics.

Matching: We determine a match between two SDRs by checking if the two encodings overlap sufficiently. For two SDRs x and y:

    match(x, y) ≡ overlap(x, y) ≥ θ

If x and y have the same cardinality w, we can determine an exact match by setting θ = w. In this case, if θ is less than w, the overlap score will indicate an inexact match.

Consider an example of two SDR vectors:

    x = 0100000000000000000100000000000110000000
    y = 1000000000000000000100000000000110000000

Both vectors have size n = 40, sparsity s = 0.1, and cardinality w = 4. The overlap between x and y is 3; i.e., there are three ON bits in common positions of both vectors. Thus the two vectors match when the threshold is set at θ = 3, but they are not an exact match. Note that a threshold larger than either vector's cardinality (i.e., θ > w) implies a match is not possible.

Capacity of SDRs and the Probability of Mismatches

To be useful in practice, SDRs should have a large capacity. Given a vector with fixed size n and cardinality w, the number
of unique SDR encodings this vector can represent is the combination n choose w:

    (1)  (n choose w) = n! / (w! (n − w)!)

Note this is significantly smaller than the number of encodings possible with dense representations in the same size vector, which is 2^n. This implies a potential loss of capacity, as the number of possible input patterns is much greater than the number of possible representations in the SDR encoding. Although SDRs have a much smaller capacity than dense encodings, in practice this is meaningless. With typical values such as n = 2048 and w = 40, the SDR representation space is astronomically large at 2.37 × 10^84 encodings; the estimated number of atoms in the observable universe is 10^80.

For SDRs to be useful in representing information, we need to be able to reliably distinguish between encodings; i.e.,
SDRs should be distinct such that we don't confuse the encoded information. It is valuable, then, to understand the probability with which two random SDRs would be identical. Given two random SDRs with the same parameters, x and y, the probability they are identical is:

    (2)  P(x = y) = 1 / (n choose w)

Consider an example with n = 1024 and w = 2. There are 523,776 possible encodings, and the probability two random encodings are identical is rather high, i.e., one in 523,776. This probability decreases extremely rapidly as w increases. With w = 4, the probability dives to less than one in 45 billion. For n = 2048 and w = 40, typical HTM values, the probability two random encodings are identical is essentially zero. Please note (2) reflects the false positive probability under exact matching conditions, not the inexact matching used in most HTM models; this is discussed later in the chapter.

The equations above show that SDRs with sufficiently large sizes and densities have an impressive capacity for unique encodings, and there is almost no chance of different representations ending up with the same encoding.

Overlap Set

We introduce the notion of an overlap set to help analyze the effects of matching under varying conditions. Let x be an SDR encoding of size n with w_x bits ON. The overlap set of x with respect to b is Ω_x(n, w, b), defined as the set of vectors of size n with w bits ON that have exactly b bits of overlap with x. The number of such vectors is |Ω_x(n, w, b)|, where |·| denotes the number of elements in a set. Assuming b ≤ w_x and b ≤ w:

    (3)  |Ω_x(n, w, b)| = (w_x choose b) × ((n − w_x) choose (w − b))

The first term in the product of (3) is the number of subsets of x with b bits ON, and the second term is the number of other patterns containing n − w_x bits, of which w − b bits are ON.

The overlap set is instructive as to how we can compare SDRs reliably, i.e. not get false negatives or false positives, even in the presence of significant noise, where noise implies random fluctuations of ON/OFF bits. In the following sections we explore the robustness of SDRs in the presence of noise using
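The storage and matching scheme described in this chapter (keep only the locations of the 1 bits, optionally subsampling them) can be sketched in a few lines of Python. This is an illustrative sketch, not Numenta's implementation; representing an SDR as the set of its ON-bit positions and the helper names overlap and match are our own choices:

```python
import random

def overlap(x, y):
    """Overlap score: the number of ON bits in common between two SDRs.
    For sets of ON positions this equals the dot product of the binary vectors."""
    return len(x & y)

def match(x, y, theta):
    """Two SDRs match when their overlap meets the threshold theta."""
    return overlap(x, y) >= theta

n, w = 2048, 40                      # typical HTM values
rng = random.Random(42)              # fixed seed so the sketch is repeatable

# An SDR, stored as the set of positions of its 1 bits.
a = set(rng.sample(range(n), w))

# Subsampling: remember only 10 of the 40 ON-bit locations.
stored = set(rng.sample(sorted(a), 10))

# The original SDR matches its own subsample.
print(match(a, stored, theta=10))    # True

# A random SDR shares almost none of those 10 locations.
b = set(rng.sample(range(n), w))
print(overlap(b, stored))
```

The last overlap is almost always 0 or 1, far below the threshold, which is why storing only ten of the 1-bit locations is sufficient in practice.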
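The union property can be sketched the same way. Again this is a toy illustration under the same set-of-ON-positions assumption; the helper name in_union is our own:

```python
import random

n, w = 2048, 40
rng = random.Random(7)

# Ten random SDRs, each stored as the set of its ON-bit positions.
sdrs = [frozenset(rng.sample(range(n), w)) for _ in range(10)]

# Forming the union is a simple OR of the bits, i.e. a set union.
union = frozenset().union(*sdrs)
print(len(union))  # at most 400 ON bits: denser than w = 40, but still sparse

def in_union(x, u):
    """Membership test: all of x's ON bits must appear in the union.
    False positives are possible but extremely unlikely for sparse SDRs."""
    return x <= u  # subset test

# Every SDR used to build the union is recognized as a member...
print(all(in_union(x, union) for x in sdrs))  # True

# ...while a fresh random SDR is almost surely rejected.
y = frozenset(rng.sample(range(n), w))
print(in_union(y, union))
```

Note the one-way nature of the operation: nothing in `union` records which ten SDRs produced it, yet membership of any candidate can still be checked.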
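The capacity and collision figures quoted in this section can be checked numerically with Python's standard math.comb; the function names capacity, p_identical, and overlap_set_size are illustrative labels of our own:

```python
from math import comb

def capacity(n, w):
    """Number of unique SDR encodings with n bits, w of them ON: n choose w (Eq. 1)."""
    return comb(n, w)

def p_identical(n, w):
    """Probability that two random SDRs with the same n and w are identical (Eq. 2)."""
    return 1 / capacity(n, w)

print(capacity(1024, 2))            # 523776
print(capacity(1024, 4))            # 45545029376, i.e. over 45 billion
print(f"{capacity(2048, 40):.2e}")  # about 2.37e+84, with typical HTM values

def overlap_set_size(n, w_x, w, b):
    """Size of the overlap set (Eq. 3): vectors of size n with w bits ON
    that share exactly b ON bits with a fixed SDR of cardinality w_x."""
    return comb(w_x, b) * comb(n - w_x, w - b)

# Sanity check: summing over all possible overlaps recovers the full capacity.
print(sum(overlap_set_size(64, 8, 8, b) for b in range(9)) == capacity(64, 8))  # True
```

The final check is Vandermonde's identity: every vector with w bits ON overlaps x in exactly one of the b = 0 … w ways, so the overlap set sizes partition the whole encoding space.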
Chapter Revision History

The table notes major changes between revisions. Minor changes, such as small clarifications or formatting changes, are not noted.

Version   Date          Changes                                          Principal Author(s)
0.4                     Initial release                                  A. Lavin, S. Ahmad, J. Hawkins
0.41      Dec 21, 2016  Replaced figure 5 and made some clarifications   S. Ahmad
