Monday, 31 October 2011

The Human Brain: Your very own supercomputer

We are amazed when we hear about computers with unbelievably large memories and processing speeds, but we fail to recognize that the best supercomputer has always been within us: our brain. Yes, that's true. You must have heard of computers having thousands of Terabytes (1 Terabyte (TB) = 1024 Gigabytes (GB)), but when we talk of the memory size of the human brain, we need an even bigger unit, the Petabyte (1 Petabyte (PB) = 1024 TB). In fact, the best approximation of human memory capacity is about 2.5 PB (2.5 * 1024 * 1024 = 2621440 GB). Wow, that's something. In analogical terms, this is of course the secondary memory, i.e. the memory in which we store permanent data, just like a computer does on its HDDs.
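If you want to play with these unit conversions yourself, a few lines of Python are enough. This is just a back-of-the-envelope sketch of the arithmetic used above; the 2.5 PB figure itself is only an estimate.

    # Rough unit arithmetic for the 2.5 PB estimate quoted above.
    GB_PER_TB = 1024
    TB_PER_PB = 1024

    brain_capacity_pb = 2.5                      # estimated, not a measured value
    brain_capacity_gb = brain_capacity_pb * TB_PER_PB * GB_PER_TB
    print(f"{brain_capacity_pb} PB = {brain_capacity_gb:,.0f} GB")   # 2,621,440 GB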

Actually, the brain is composed of around 100 billion neurons on average, and all of them connect to form an interconnected network. On average, each neuron connects with about 1000 other neurons. These neurons respond to the neural impulses that the brain receives from the nervous system, which is itself handled by the brain. In simple terms, it's safe to assume that every single neuron is a tiny computer in itself: it has some processing capability, it has some memory, and it treats neural impulses just like a computer responds to commands. All of these neurons connect and coordinate to make up the overall storage that we talked about in the first paragraph, but the processing is divided among each and every one of them.
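Taking those two averages at face value, you can estimate the number of connections in the network. This is a sketch of the arithmetic only; both numbers are rough figures, not exact counts.

    # Back-of-the-envelope estimate of connections (synapses) in the brain.
    neurons = 100e9              # ~100 billion neurons (approximate)
    connections_per_neuron = 1000

    total_connections = neurons * connections_per_neuron
    print(f"~{total_connections:.0e} connections")   # on the order of 1e14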

How a neuron works is a very complex matter, but the overall system is such that tasks are divided into portions and the neurons solve these portions individually. Who divides the task into portions? Another set of neurons. Hence, different neurons assume different roles as and when demanded. Some of them divide the work, some of them do small-scale computations, and some of them add up the partial results to form the overall result. Every neuron is capable of doing any of these functions, just like the processor units in supercomputers.
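The divide, compute and combine pattern described above is essentially the same idea behind parallel processing frameworks. Here is a minimal sketch in Python, assuming a toy task of summing a large list, with separate "splitter", "worker" and "combiner" roles standing in for the three groups of neurons.

    from multiprocessing import Pool

    def split(task, n_parts):
        """The 'splitter' role: divide one big task into portions."""
        size = len(task) // n_parts + 1
        return [task[i:i + size] for i in range(0, len(task), size)]

    def work(portion):
        """The 'worker' role: solve one small portion."""
        return sum(portion)

    def combine(partials):
        """The 'combiner' role: add up the partial results."""
        return sum(partials)

    if __name__ == "__main__":
        task = list(range(1_000_000))
        portions = split(task, n_parts=4)
        with Pool(4) as pool:                  # four parallel 'neurons'
            partials = pool.map(work, portions)
        print(combine(partials))               # same answer as sum(task)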

Actually, supercomputers are composed of processor and memory arrays in such a way that processors may have their own separate memories to use for calculations. In this way, any processor can be programmed to do anything, and the same goes for the neurons of the brain: the processing of data is more or less the same. However, the supercomputer's processor units don't have anything to do with permanent storage (secondary memory); secondary memory is organized as separate RAID/LVM arrays with separate memory controllers for access. Hence the neuron, which can handle all the departments on its own, is superior to a supercomputer's processing units.

Now, the concept of primary memory is not applicable to the brain, because the brain is always online. Hence, logically, the brain treats short-term and long-term storage alike; it simply disposes of or retains the short-term content as demanded. For example, suppose you make a big calculation that involves predicting sales on the basis of some market values. You start working with a predefined calculation logic and the market values. The logic is already there in your neurons, and the values are loaded temporarily. The neurons work out the calculations and give you the end results. While making the calculations, you would have made some sub-calculations, like carrying digits or adding multiple figures to get a single one. You will hardly ever refer back to those carries or intermediate results, so after some time the brain will flush them, whereas the final figures will be retained for longer because you will use them again. The logic will also be retained, because it was already there and, having been used here, may be needed again. In short, the brain stores important things and discards the less important ones, and importance is determined largely by how frequently you use something. So we can assume that the neurons store all the information, both permanent and temporary, and that both are equally accessible, unlike computers, where primary memory is more easily accessible than secondary memory.
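The "keep what is used often, flush the rest" behaviour described above is essentially what a least-frequently-used cache does. Here is a minimal sketch, assuming a fixed capacity and a simple use-count as the measure of importance; the real brain obviously does nothing this crude.

    from collections import Counter

    class FrequencyMemory:
        """Retain the most frequently used items, flush the rest."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.items = {}           # name -> value
            self.uses = Counter()     # name -> how often it was used

        def remember(self, name, value):
            self.items[name] = value
            self.uses[name] += 1
            self._flush_if_needed()

        def recall(self, name):
            self.uses[name] += 1
            return self.items[name]

        def _flush_if_needed(self):
            while len(self.items) > self.capacity:
                # Discard the least frequently used item.
                victim, _ = min(self.uses.items(), key=lambda kv: kv[1])
                del self.items[victim], self.uses[victim]

    memory = FrequencyMemory(capacity=2)
    memory.remember("calculation logic", "predict sales from market values")
    memory.remember("final figure", 4200)
    memory.recall("calculation logic")
    memory.recall("final figure")
    memory.remember("carry digit", 1)     # used only once, so it gets flushed
    print(sorted(memory.items))           # ['calculation logic', 'final figure']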

Now, even if we assume that some neurons store permanent data and cannot participate in calculations, we still have tonnes of neurons that can. Even if we take just 1 percent of the total to be available, we have about 26214 GB of memory available for computations. This is analogous to the RAM of supercomputers. Ever imagined a computer with that much RAM? Moreover, every neuron is capable of processing the data that it holds and the data that it receives from signals. Hence, if we push the analogy, the processing capability of the brain would be at least 100 times that of the latest supercomputer from Cray. Supercomputers may have specialized architectures, separate floating-point units, or even AI-based task-allocation schemes, but no supercomputer can, at least for the next 10 years, even think of matching the human brain. You may not be aware of this, so let me bring you a little closer.

When you see things, you are using the feed from a high-definition camera: your eyes. The camera is so good that the transfer rate is in GBs/sec. The same is the story with your ears and other sensory organs. The brain keeps getting data from these "peripherals" and keeps processing it. It also stores the important impulses, keeping a large number of them in subconscious memory. In short, even when we are doing nothing, the brain is working on TBs of data. When we do some extensive calculation task, we are using just a small portion of the available capability. According to a popular (and much debated) claim, the average human being uses only a percent or two of the brain's capability, while Einstein used around 3-4 percent. And everyone knows what he did. So, if by using a mere 3 percent of such a large setup a man was able to change the world forever, you can imagine what would happen if someone used all of it.

The brain is in fact a very complex computer. You must have heard of the dream theories: dreams are thought to be the brain's own creations. The brain simply combines stored things from the past and creates new, meaningful things. Trust me, no computer in the world is capable of doing that. So, I just want you to speculate for a while on the amazing capabilities that this organ in our skull possesses. Yep, you are the owner of the best computer in the world.







Saturday, 22 October 2011

The memory bound on AI systems: The move towards self-awareness

Quite often these days, we hear about the memory explosion in the world of computers. This explosion basically refers to the increase in the amount of memory per unit size, and there has indeed been an unanticipated memory explosion. Recall the days of SDR SDRAM, when computers used to have 64/128/256/512 MB of RAM, and compare that with the current times. The computers of the present era have DDR SDRAM, and general RAM sizes have gone up to 8 GB/16 GB/64 GB, and even more than that in specialized architectures. Even in terms of storage there has been quite an improvement, and there has also been an explosion in processing speeds: compare the 686 series with the i-series and you will find a breathtaking improvement. That was the general scenario, but in the case of AI systems we generally use specialized architectures like the ones used in supercomputers. In such systems, primary memories are in terms of Terabytes and secondary storage is in terms of Petabytes, and even the MIPS rates are much, much higher than those of general computers. But even after all these improvements, for generic artificially intelligent systems the processing-speed explosion may suffice, but the explosion in memory (both primary and secondary) isn't good enough.

The reason for this is that AI systems that are capable of learning may need tonnes of memory to remain stable and effective. AI systems, unlike conventional computing systems, organize their memory mainly in the form of neural networks (although there is an entire variety of knowledge representation structures that have been used in AI systems, we will concentrate on neural networks to keep the discussion simple). Whereas conventional computers have tree-like directory structures and file systems, AI systems form an entirely connected network which is much more exhaustive and much more effective for what AI systems have to do. Neural networks are an imitation of the human brain: a neural network is composed of nodes and connectors, just like the neurons and connections in our brains. Like the impulses that are transferred between the different neurons of our brain, the nodes of a neural network too transfer signals (information) among themselves.

Now we will try to see how a neural network actually works. Look at this diagram:


This is the diagram of a basic neural network. Every neural network has 3 layers of nodes: input nodes, hidden nodes and output nodes. Input nodes are passive, which means that they do not contain any information and do not manipulate the information that comes to them. The input nodes simply pass on the data (data here means variables; we will consider these variables to be numbers) to the many connectors that leave them. For example, in the above figure, look at the first input node. It gets a single variable (X11) and passes it on, unchanged, to the four connectors that connect it to 4 hidden nodes.


The hidden nodes (internal nodes in the middle layer) as well as the output nodes are not passive. Every connector that connects to a hidden node multiplies the value that it carries by a weight. A weight here is just a number. For example, if a value of 10 were coming to a hidden node and the weight on that connector were 0.7, then the weighted value would be 0.7 * 10 = 7. So what comes to a hidden node is a set of weighted values. The hidden node applies a sigmoid function, which combines all these weighted values into a single number that lies between 0 and 1. So every hidden node gives an output that lies between 0 and 1.

After that, the output nodes receive values from the hidden nodes. Output nodes have multiple input connectors but only a single output, so these nodes combine their input values to reduce the number of outputs that the network produces. Hence they too manipulate the information that they get.
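To make this concrete, here is a minimal sketch of such a forward pass in Python. The layer sizes, the example weights and the input values are all made up for illustration; only the structure (weighted sums passed through a sigmoid, then combined at the output node) follows the description above.

    import math

    def sigmoid(x):
        """Squash any number into the range (0, 1)."""
        return 1.0 / (1.0 + math.exp(-x))

    def forward(inputs, hidden_weights, output_weights):
        # Each hidden node takes a weighted sum of all inputs, then applies the sigmoid.
        hidden_outputs = [
            sigmoid(sum(w * x for w, x in zip(weights, inputs)))
            for weights in hidden_weights
        ]
        # The single output node combines the hidden outputs in the same way.
        return sigmoid(sum(w * h for w, h in zip(output_weights, hidden_outputs)))

    # Toy network: 2 inputs, 3 hidden nodes, 1 output (all numbers invented).
    hidden_weights = [[0.7, -0.2], [0.1, 0.4], [-0.5, 0.9]]
    output_weights = [0.3, 0.8, -0.6]
    print(forward([10, 3], hidden_weights, output_weights))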

There can be multiple layers of input nodes, hidden nodes and output nodes. Input nodes connect either to more input nodes or to hidden nodes, whereas hidden nodes connect either to more hidden nodes or to the output nodes. In this way we get a fully interconnected neural network.

So, this is how a neural network keeps information in it. The input nodes accept the raw information and the output nodes present the results of applying the knowledge. The weights form the most important part, because it is these weights that determine how good the results will be. So the overall problem of having effective knowledge comes down to fine calibration of the weights.

Now, coming back to the original problem. AI systems are of two types. The first have an existing set of neural networks, and no new neural networks are added during operation. Expert systems come under this category. Expert systems are AI systems that are specialized for some particular task; they are just like human experts. In these systems, the amount of knowledge which the AI system will use is already known. The knowledge is structured in the form of neural networks, and as the system starts working, it uses this knowledge to solve problems. As the system works, it keeps improving its knowledge by adjusting the values of the original weights: if the system knows that it failed on a few occasions because of some faulty weight, it can recalibrate that weight on the basis of its findings. These systems need a limited amount of primary memory and storage to function.
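The recalibration step can be sketched very simply. The rule below nudges each weight in proportion to the error it contributed, which is the basic idea behind gradient-style updates; the learning rate, the data and the single-node "network" are all invented for illustration and are far simpler than anything a real expert system would use.

    def recalibrate(weights, inputs, target, prediction, learning_rate=0.05):
        """Nudge each weight to reduce the error on this one example."""
        error = target - prediction
        return [w + learning_rate * error * x for w, x in zip(weights, inputs)]

    # One 'faulty' set of weights, corrected over repeated failures on the same case.
    weights = [0.9, -0.4]
    inputs, target = [1.0, 2.0], 1.0
    for attempt in range(20):
        prediction = sum(w * x for w, x in zip(weights, inputs))
        weights = recalibrate(weights, inputs, target, prediction)
    print(weights)   # drifts toward values that predict the target correctly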

The other class of systems is different. These AI systems are capable of much more than the previous class: they can also recalibrate the weights of their existing neural networks, but in addition they can generate new neural networks and expand the existing ones. For example, let's take a humanoid robot. This robot knows only a few things at the beginning. As it starts its operation, it is going to learn new things. The amount of knowledge required by a humanoid to function is so large that it is never possible to incorporate all of it at the very beginning, so humanoids start functioning with a minimal amount of knowledge and are equipped to learn new things on their own. Now suppose that the humanoid comes across an entirely new task. As it learns how to do it, it builds a new neural network based on the knowledge that it gathers. Hence, as it learns new things, it keeps generating new neural networks. The humanoid may also extend an existing neural network when it learns a new variation of something it already knows; say it knows how to cook plain rice and has just learned how to cook fried rice. It will add some new nodes to its existing neural network so that the network becomes more versatile.
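A crude way to picture this growth is a dictionary of skills, where learning an entirely new task creates a new network and learning a variation adds nodes to an existing one. The sketch below only counts nodes to show how the structure grows; the skill names and sizes are invented, and real networks would of course hold weights, not just counts.

    class GrowingKnowledgeBase:
        """Toy model: one 'network' per skill, measured only by its node count."""

        def __init__(self):
            self.networks = {}   # skill name -> number of nodes

        def learn_new_skill(self, skill, nodes):
            # An entirely new task gets a brand-new network.
            self.networks[skill] = nodes

        def extend_skill(self, skill, extra_nodes):
            # A variation of a known task grows the existing network.
            self.networks[skill] += extra_nodes

        def total_nodes(self):
            return sum(self.networks.values())

    kb = GrowingKnowledgeBase()
    kb.learn_new_skill("cook plain rice", nodes=500)
    kb.extend_skill("cook plain rice", extra_nodes=120)   # fried rice variation
    kb.learn_new_skill("make coffee", nodes=800)
    print(kb.total_nodes())   # keeps climbing as the humanoid learns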

It is for these systems that memory limits impose a restriction on functioning. In the beginning, we are unaware of how a humanoid will learn things and at what pace it will learn them, and even if we gave it the highest-capacity memory chips available, they would not suffice. The problem is that humanoids have to mimic humans, and we human beings have what is, for practical purposes, an infinite amount of memory. Our brains are so powerful and our memory capacity is so vast that no humanoid can even think of matching it (at least for the next decade or so). Although we use only a limited proportion of our brain, we are sure that we will never run out of memory. That is not the case with our humanoid. It has to build new neural networks and it has to expand the existing ones, so as it starts learning, it has to learn a lot of things and retain most of what it learns. The humanoids built to date started to learn at exponential rates until they either had to shut down for lack of memory or had learnt what they were intended to learn.

All research humanoids were started with minimal knowledge, and as they started interacting with the world, they started learning new things. But the problem is that the algorithms are not very good at telling them when to stop. As a result they learn a lot of things, keep generating more and more networks, and keep expanding the networks they already have. So, though they begin learning at a good rate, they eventually fall short of memory. This happens because we human beings know our limits, but the humanoids don't. They fall short of both primary memory and permanent storage. As they expand neural networks, they have to retain the existing networks in current memory, and they also have to keep using the updated networks if they are to continue doing the job they were learning. Hence the amount of neural network that has to be retained in memory crosses the threshold. Moreover, humanoids are inherently multitasking, and therefore they have to keep multiple neural networks in memory while solving problems.

There have been a few solutions to the problem of limited primary memory. Modified algorithms can help the humanoids decide what portion of a neural network they have to keep in current memory. But even in that case, we are eventually going to reach a bound.

The second problem is that of permanent storage. As the humanoids keep learning new things, they have to store the enlarged and newly created neural networks so that they have this acquired knowledge for future use. As a result, they have to keep storing the knowledge that they acquire over time, so the permanent information held by every humanoid also keeps increasing. Imagine how much knowledge a humanoid would be holding.

Let's try to get a feel for the magnitude of information that we are talking about. If a humanoid has to learn to make coffee, it must have knowledge of how to see things and how to distinguish coffee and its assisting ingredients from the other objects in the world, and then it must also have knowledge of how to make the coffee itself. The problem of recognizing objects in the world is a big one in its own right. Learning by example is the phenomenon used for objects: if the humanoid has seen a coffee jar, it will store it in the form of the height, weight and other visual and physical aspects of the jar. But if the next jar is a bit different, it will have to add information and refine its existing class of coffee jars to include this new case. So, with time, the humanoid will refine its classes of objects by adding more information to them, and it will also define new procedures for the new things that it learns. All these things will be stored in different forms: classes will be stored as sets of facts, whereas procedures will be stored as sequences of steps. These forms are either entirely new or are variations of the neural networks that we discussed. Irrespective of the form in which this information is stored, the amount of memory needed is humongous. And mind you, the amount of memory we are talking about here runs into thousands of Petabytes if the humanoid is to learn and retain most of what it encounters.
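Learning by example, in the sense used above, can be sketched as a class description that keeps widening the range of each feature as new examples arrive. Everything here (the feature names, the jar measurements, the range representation) is invented purely to illustrate how the stored description grows with each new example.

    class ObjectClass:
        """A class of objects described by the ranges of its observed features."""

        def __init__(self, name):
            self.name = name
            self.feature_ranges = {}   # feature -> (min seen, max seen)

        def add_example(self, features):
            # Refine the class: widen each range to include the new example.
            for feature, value in features.items():
                lo, hi = self.feature_ranges.get(feature, (value, value))
                self.feature_ranges[feature] = (min(lo, value), max(hi, value))

        def matches(self, features):
            return all(
                lo <= features.get(f, lo) <= hi
                for f, (lo, hi) in self.feature_ranges.items()
            )

    coffee_jar = ObjectClass("coffee jar")
    coffee_jar.add_example({"height_cm": 15, "weight_g": 200})
    coffee_jar.add_example({"height_cm": 18, "weight_g": 260})     # a slightly different jar
    print(coffee_jar.matches({"height_cm": 16, "weight_g": 230}))  # True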

So, is it possible to put that much memory in a robot that looks like a human? Not in the current times, that's for sure. But a modification of the client-server architecture can be used to regularly transfer tonnes of information from the humanoid to some remote storage where that much memory is available. Of course, given the network bandwidths of the current times, a single transfer would take a considerable amount of time, but we have no other option as of now. The problem arises when the humanoid has to perform some action and knows that it has the knowledge for solving it. If the neural network, or the portion of it needed for solving the problem, is within local storage (storage within the humanoid), then it's fine. Otherwise, it will have to access the remote repository where it has stored all the knowledge it gathers, and you can imagine how long it will have to wait before all the needed information becomes available and it can start acting.
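The lookup path described above is basically "check the local store first, otherwise fetch from the remote repository and pay the transfer delay". Here is a minimal sketch of that idea; the RemoteStore class, the simulated per-network delay and the skill names are all invented stand-ins, not a real protocol.

    import time

    class RemoteStore:
        """Stand-in for the remote knowledge repository (e.g. a server)."""

        def __init__(self, networks, seconds_per_network=1.0):
            self.networks = networks
            self.delay = seconds_per_network

        def fetch(self, skill):
            time.sleep(self.delay)              # simulated transfer time
            return self.networks[skill]

    class Humanoid:
        def __init__(self, local_networks, remote_store):
            self.local = dict(local_networks)
            self.remote = remote_store

        def get_network(self, skill):
            if skill in self.local:             # fast path: already on board
                return self.local[skill]
            network = self.remote.fetch(skill)  # slow path: wait for the transfer
            self.local[skill] = network
            return network

    remote = RemoteStore({"make coffee": "<large neural network>"})
    robot = Humanoid(local_networks={"walk": "<network>"}, remote_store=remote)
    print(robot.get_network("walk"))            # instant
    print(robot.get_network("make coffee"))     # pays the simulated delay once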

So where's the catch? Well, the conventional methods, and modifications of those conventional methods, don't seem to offer any viable solution to this problem. But a solution does exist: incorporating self-awareness in AI systems.

As the term suggests, self-awareness means that the humanoid or AI system becomes aware of its own existence and of its own limits. Obviously the system already knows what memory and processing capabilities it has, but the emphasis here is on being aware of how much it is capable of learning, just as we human beings are. Every humanoid will still start learning at an exponential rate; as it encounters new problems, it will gather more and more knowledge through its interaction with the world.
But since it knows about itself, it will keep deleting obsolete and temporary knowledge with time, and it will also learn only a portion of what it would have learnt in the previous case. The learning-by-example method would still make it classify coffee jars, and that remains an effective means of learning, but now that it is self-aware it will include only the few aspects that it considers necessary, keeping its memory limits in mind. This is analogous to a student who, while reading a chapter, notes down the more important points and emphasizes them. Likewise, the self-aware humanoid will grasp only the important aspects and store only those. Later on, if it fails while attempting to solve the problem, it tries to grasp the other things that it believes it missed out on in the first attempt.
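One way to picture "store only what seems important, within a fixed budget" is the filter below. The importance scores, the threshold and the budget are all arbitrary illustrative numbers; in a real system they would come from the learning algorithm itself.

    def select_knowledge(observations, importance_threshold=0.5, memory_budget=3):
        """Keep only the most important observations that fit in the budget."""
        important = [
            (score, fact) for fact, score in observations.items()
            if score >= importance_threshold
        ]
        important.sort(reverse=True)                  # most important first
        return [fact for _, fact in important[:memory_budget]]

    observations = {
        "coffee jar is cylindrical": 0.9,
        "jar label is red today":    0.2,   # temporary detail, likely useless later
        "lid twists anticlockwise":  0.8,
        "jar sits left of the sink": 0.4,
        "beans smell roasted":       0.6,
    }
    print(select_knowledge(observations))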

Hence, the system which previously used to gain a lot of knowledge at the first attempt and was sure that it would be able to solve the problem when it encountered it again, now takes its chances and gains only partial knowledge. As this humanoid fails again and again, it keeps improving its knowledge base. Eventually, when the success rate goes above a threshold, it knows that it has gained enough knowledge and stops adding more. This is the essence of self-awareness: the system should know when to stop, and that's why the threshold values have to be chosen very carefully. The robot thus begins to learn the way human beings do; every human being tries something two or three times before starting to succeed, and that is how the humanoid would now work. Another aspect is that, with time, the humanoid becomes aware of what its skills are and can guarantee some success in those domains. Over time, it will keep refining its knowledge base by adding new knowledge and discarding unused and obsolete knowledge. Effectiveness will be reduced in some cases: if there was a problem it solved way back in the past, it might take a long time to solve it again, because the old neural net was deleted and the problem has to be solved from scratch. But the system will need far less memory than before.
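The stopping rule can be sketched as a simple loop: keep attempting, keep improving, and stop adding knowledge once the recent success rate crosses the chosen threshold. The attempt simulation below is a toy (success probability simply grows with the amount of knowledge gathered); only the structure of the loop reflects the idea described above.

    import random

    def learn_until_good_enough(success_threshold=0.8, window=10, seed=42):
        """Keep learning until the recent success rate crosses the threshold."""
        random.seed(seed)
        knowledge = 0.1                  # toy measure of how much has been learnt
        recent = []                      # 1 for success, 0 for failure

        while True:
            success = random.random() < knowledge
            recent = (recent + [int(success)])[-window:]
            if not success:
                knowledge = min(1.0, knowledge + 0.05)   # learn a bit more after a failure
            rate = sum(recent) / len(recent)
            if len(recent) == window and rate >= success_threshold:
                return knowledge, rate   # self-aware stop: good enough, stop adding

    knowledge, rate = learn_until_good_enough()
    print(f"stopped with knowledge={knowledge:.2f}, recent success rate={rate:.0%}")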

This concept cannot be used in places where the success rate is critical, but it can be used in humanoids that mimic the life of a regular human being who is in a training phase. Even after becoming self-aware, the system will need a little more help from technical advancements, because even with these mechanisms, the amount of permanent information needed would be difficult to fit into a machine the size of a human being.

At the end, I have to tell you that this post is by no means exhaustive. It's just a small snippet of a very big research area; a fully exhaustive post would have taken at least 200 pages, which is why this one is a seriously scaled-down version. I just wanted to share these enticing insights with you and have you share in that exhilarating imagination with me. That was the sole purpose behind putting up this post.

Thanks for your patience.





Thursday, 20 October 2011

Building a smarter planet(The quest for information)

Every one of us is surrounded by a pool of information-emitting entities. Perhaps this is the first time you have heard that term, but in reality such entities have been there ever since the inception of this planet. Consider our bodies or, for that matter, the body of any living being. As a whole, we don't seem to be emitting any information apart from the regular biochemical excretions and lingual/non-lingual communication. But, surprisingly, that very body emits a hell of a lot of information every now and then, and the EEG and ECG are classic examples of how to capture that information and put it to some useful purpose.

So, now that you have a brief idea of what this post is all about, let's get to the main point directly. Information, as we see it, is some useful knowledge, but that is exactly the flaw in the definition we follow. What is considered useless today can become a very useful bit of information tomorrow. The EEG and ECG, for example, would have appeared to be total nonsense to the doctors of the medieval era. Hence, our definition of information almost always prevents us from seeing the real picture and makes us miss out on potential, yet untouched, aspects. Let's get to a real example now.

Every computer is composed of components like the RAM, MoBo (motherboard), chipset, processors, buses, cooling units, HDDs etc. All of these components come together to form the computer as a whole. Now, there are two types of signals involved with these devices: the digital signals which the devices use to communicate with each other via the bus, and the electrical supply which is used to run the individual components. A lot of emphasis has been laid on how the buses should be organised and how the overall architecture has to be designed, all to make the digital signals travel faster and become more effective. Therefore, most of the innovation went into the improvement of buses and communication interfaces, because it is these very things that shape the speed and response time of a computer, and there has indeed been tremendous improvement in these aspects. Device interfaces progressed from ATA/IDE to SATA, and bus specifications improved from SCSI to USB to the upcoming LightPeak. The magnitudes have improved tremendously. But the SMPS, the component that supplies electricity to all the devices, hasn't seen much improvement, and as of now there is very little hope that it will.

Why, one may ask? Well, once the SMPS reached a stage where it seemed to be doing what it was intended to do, people assumed it did not need any further improvement. The only improvements added later on were to make it comply with the latest bus and communication interface specifications. But these tweaks in voltage and current specifications don't constitute a breakthrough, and there could indeed have been a breakthrough improvement that we missed out on.

Every time your computer breaks down, there is either some component or some particular sub-component (a resistor, a capacitor etc.) that needs to be replaced. This happens when an incompatible device is connected, when a faulty device is connected, when some jumper setting goes wrong, or even when there is an internal surge. The reason these components or sub-components blow up is that some component got more electricity than it needed, and this extra electricity often flows through the supply wires of the SMPS. Now, the SMPS is based on fixed logic: it simply knows how much pre-specified voltage or current has to be passed through a certain wire. The transformers and other cut-out mechanisms inside the SMPS help it ensure that, whatever the external voltage, the voltage it supplies will be what the specifications demand.

So, where do they miss the trick? Well, if all the voltages and currents are already in place, then why do the components blow? The SMPS is responsible only for the power that is supplied to the MoBo and peripherals; after that, the MoBo distributes the power to the bus and the internal circuitry. The reason currents exceed the limits at times is that either non-compliant components are connected, or a particular device is faulty (or becomes faulty) and passes more than what was needed. The SMPS is unaware of the actually connected devices, whereas the MoBo can get a sound idea of what each device actually is. Now, if the SMPS and the MoBo were configured to exchange a minimal amount of information, the MoBo could use some low-power signal (driven by CMOS power) to find out the internal configuration before the actual boot-up. This low-power signal would just be used to ask the individual components for their interface-related information. Hence, by the time the system is ready to boot, the MoBo already has this information, plus information about its own specifications. Even with simplistic logic inside the SMPS, this information could be used to work out the voltages and currents that have to be delivered through every outlet cable of the SMPS.

So, what's the real deal? Well, if the SMPS has detailed knowledge of what has to be delivered, it can either change its internal configuration to provide exactly that, or it can simply cut off to prevent damage to some component(s). Compare this with the previous situation. The old SMPS knew just how to supply some fixed voltage and current across its wires, whereas our new SMPS is aware of the overall computer configuration and can adjust itself to ensure that the voltages it supplies do not blow away any components. Hence the previous static SMPS, which had very limited knowledge, becomes a smart SMPS that knows a lot about the computer system and can change itself accordingly. With just a little bit of knowledge about how the system is configured, the SMPS and MoBo would be able to ensure that the computer system never breaks down. It is just a matter of harnessing some information, and harnessing it correctly. Although the computer BIOS does get updated automatically when the configuration changes, that update takes place during boot-up, so if any device is wrongly configured or a non-compliant device is connected, it will blow right there. The suggested method, by contrast, is like a system diagnosis before the system actually starts, and hence prevents any faulty configuration from running. A static computer system thus becomes a dynamic computer system that can adjust itself according to the different hardware connected to it.
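To make the proposed handshake a little more concrete, here is a toy sketch of the idea. The device list, the rated currents and the simple "match the rating or cut off" rule are all assumptions made up for illustration; real power negotiation is considerably more involved.

    # Toy sketch of the MoBo -> SMPS handshake described above.

    def probe_devices():
        """MoBo side: low-power query of each component's power requirements."""
        # Invented figures; a real probe would read them from the devices.
        return {
            "motherboard": {"volts": 12.0, "max_amps": 8.0},
            "hdd":         {"volts": 5.0,  "max_amps": 0.9},
            "gpu":         {"volts": 12.0, "max_amps": 12.5},
        }

    def plan_rails(devices, rail_limits):
        """SMPS side: set each outlet to what the device needs, or cut it off."""
        plan = {}
        for name, need in devices.items():
            limit = rail_limits.get(need["volts"], 0.0)
            if need["max_amps"] <= limit:
                plan[name] = f"supply {need['volts']}V at up to {need['max_amps']}A"
            else:
                plan[name] = "cut off (would exceed the safe limit)"
        return plan

    rail_limits = {12.0: 10.0, 5.0: 2.0}     # amps the SMPS can safely push per rail
    for device, decision in plan_rails(probe_devices(), rail_limits).items():
        print(device, "->", decision)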

Now, there is no doubt that costs will go up with the addition of this extra logic, but wouldn't a user be willing to spend a bit more to get a computer system that is as infallible as it can get? Even this computer system can fail when there is a problem with the initial logic or the SMPS itself, but the individual components, and more importantly the data, will stay safe. In fact, a few IBM laptops already have a BIOS pre-setting feature that runs solely on CMOS battery power, but the idea suggested here is much more effective.

So we had an example of how information, which was always there but always unattended, can be used to make an "invincible" computer system. In fact, this was just one of several ideas. Some universities and the R&D departments of organizations like IBM have already come up with a whole list of such things that they are working on. Some of them are:

1. Tracking every piece of medicine as it goes from manufacturing units to inventories to supply chains and finally to the stores. In this way, information about the medicine's lifetime can be used to counter adulteration and the repackaging of old medicines.


2. Collecting information (EEG and ECG patterns, breathing rate, temperature variations, movements, growth and other miniature signals emitted by the body) for a newborn baby and combining it with information collected from his/her DNA to find out the potential for any future diseases or abnormalities.

3. Making the electricity supply of a metropolis smarter by having every grid and every transformer keep a local computer informed about its current state. In this case, if any grid or transformer crosses its limits, or senses that it is about to, it can either shut down to prevent a total breakdown or ask the computer to update the configuration by balancing the load. All such local computers will connect to a central power-distribution network that may be regulated by humans or by some other, more powerful computer. In this way, all the systems will remain up for most of the time and potential breakdowns can be prevented. In fact, these computers don't need to be complete computers; they can be minimized, specialized versions of a full-fledged computer. A small sketch of this reporting-and-balancing loop follows below.
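Here is that sketch, assuming an invented Transformer class, made-up load figures and a crude "shed load onto the least-loaded neighbour" rule; a real grid controller would be far more sophisticated.

    class Transformer:
        def __init__(self, name, capacity_mw, load_mw):
            self.name, self.capacity, self.load = name, capacity_mw, load_mw

        def report(self):
            """Each transformer keeps the local computer informed of its state."""
            return {"name": self.name, "load": self.load, "capacity": self.capacity}

    def balance(transformers, warning_fraction=0.9):
        """Local computer: rebalance near-limit units, shut down anything over limit."""
        actions = []
        for t in transformers:
            if t.load > t.capacity:
                actions.append(f"{t.name}: shut down to prevent total breakdown")
            elif t.load > warning_fraction * t.capacity:
                spare = min(transformers, key=lambda u: u.load / u.capacity)
                shifted = t.load - warning_fraction * t.capacity
                t.load -= shifted
                spare.load += shifted
                actions.append(f"{t.name}: shifted {shifted:.1f} MW to {spare.name}")
        return actions

    grid = [Transformer("T1", 100, 96), Transformer("T2", 100, 40), Transformer("T3", 80, 85)]
    for action in balance(grid):
        print(action)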


This is just one list, but in reality we can extract information from everything we come across. Of course, the implications of the use to which that information will be put are very important, but if we start looking at the world from an entirely different perspective, most of our problems can get solved. It's just a matter of "Thinking Differently".


Monday, 17 October 2011

Artificial Intelligence: The Unforeseen Consequences

The simplest definition of artificial intelligence, or AI, is that it is a science that tries to make computers behave and act the way human beings do, and it is this very definition that has attracted scientists and engineers from around the world to work in this domain. AI, ever since its inception in the 50s, when the first thoughts of developing such systems were conceived, has been a very fascinating field of study, one considered very different from the others because of the very approach followed to model AI systems. AI systems are different from normal ones in that they approach a solution the way we human beings do, whereas conventional computing systems approach a solution in a rather rigid and procedural way. Whereas conventional systems can solve only those problems that they were coded to solve, more recently developed AI systems can generate theorems and prove them. It is this aspect of AI systems that has earned them a separate place in the world of computer science.

To most readers, AI systems primarily mean robots, as this is what has always been highlighted. But there is a lot more to AI than just robots. The umbrella of AI covers expert systems, theorem provers, artificially intelligent assembly lines, knowledge-based systems and a lot more. Although all these systems have varying architectures and very different characteristics, there is one thing that ties them together: their ability to learn from their mistakes. AI systems are programmed to find out whether their attempt at doing something resulted in a success or a failure, and they are further designed to learn from their failures and use this knowledge in future attempts at the same problem. A real-life example of this was when the IBM computer Deep Blue, which was programmed to play chess, beat the then world chess champion Garry Kasparov in 1997. Deep Blue had actually lost its earlier match against Kasparov in 1996, but it gradually got to know which moves were favourable and which were not, and this improved knowledge was used to beat Kasparov in the rematch. It is this very trait that has made designing AI systems both difficult and, at the same time, challenging.

Computer scientists may argue with the next point I am going to make, but it is something that has always concerned ethical thinkers and other people from a science background. Although AI promises to do a whole lot of good for the human race, its massive-scale implementation also brings a risk. On one hand, AI systems can help our race by managing knowledge for us, exploring new scientific concepts, assisting us in our day-to-day jobs and a whole host of other things. On the other hand, they pose a threat to our own existence.

As pointed out in the articles of Hubert Dreyfus and John Sutton of the University of California, Berkeley, the rate at which the capabilities of AI systems are increasing can be dreadful. According to them, we are not very far from the day when AI systems will become better than human beings at performing almost any task. We already have AI systems that perform not only more efficiently but also more effectively than human beings in various fields. Such fields are currently limited to analytical reasoning, concept exploration, logical inference, optimization and concept proving. At this point the list may seem restricted and may not bother a lot of people, but the next generation of AI systems, designed for particular domains, will expand it in a very big way. In the near future we are going to see systems capable of programming another system on the basis of pure structured logic, systems able to replace doctors in a few critical surgeries where doctors haven't been very successful, and systems able to carry out space exploration on their own. In fact, such systems have already been attempted, but they were assisted by human beings at some point.

One might ask why such systems were not developed in the past, when they were already thought to be buildable. The answer is that certain hardware characteristics proved to be the bottleneck. The systems mentioned above need very high processing power to support run-time reasoning, decision making and logic design, and they also need a very large memory to handle the massive amounts of information they have to process, along with large storage so that they can retain whatever they have learnt. Until a few years ago, the available processing power and memory were nowhere near what such systems actually require. But now, with the advent of multi-core processors and recent breakthroughs in memory technology, both the processing power and the memory available per unit of chip space have gone up, and as a result we are finally able to see such systems coming into action.



Now, with such systems coming into action, we can expect them to be actually deployed in the field 3-4 years from now, and going by past experience of similar trials and the advent of similar systems, they will indeed outperform human beings in the fields in which they replace them. If this turns out to be the case, we are going to face the biggest problem we have faced to date: massive-scale unemployment. Managers, who are always hungry for more efficiency and more effectiveness without many demands in return, are going to be the first to prefer such systems over human beings. They will get what they always wanted, and they will stay happy until the day they themselves are replaced by such systems on the orders of still higher-level managers. The whole hierarchy of the workflow will then consist of AI systems.

This may seem a distant reality, but going by the predictions it may actually happen. The sales of organizations will indeed go up; companies will make profits higher than they had ever expected, but governments will be struggling to cope with all-time-high unemployment figures. The nations that manage to cope with this surge by passing appropriate regulations will be the ones that eventually sustain themselves, while the ones that fail will be dragged into a state where the economy is at its peak but society is at an all-time low. The whole balance of such nations will be disrupted and the overall administration will descend into chaos. Planners will be clueless, as they will be facing something they have never faced before, and leaders will be clueless, as they will have no one to assist them in decision making. In short, the whole world may head towards an irrecoverable disaster. As of now, when we haven't seen such systems yet, this all may seem a bit far-fetched, but then ask your grandma how she felt when she saw the television for the first time.