Thursday, 10 November 2011

Predicting Earthquakes In Advance

Surprised after reading the title of this post? You might even be wondering whether it is possible at all. Well, it may turn out to be a reality in the near future. Earthquakes are perhaps one of the most devastating forces of nature. Ever since the inception of civilizations, they have claimed countless lives and caused heavy damage to property. Whereas damage to property can be limited by building earthquake-resistant structures, loss of life can be reduced both by building stronger structures and by finding ways of predicting earthquakes well in advance.

Now, the question is, how? Earthquakes are of two types: shallow ones and deep ones. Shallow earthquakes originate within a depth of around 300 km beneath the surface of the planet, while deep earthquakes originate at greater depths. The mechanism behind shallow earthquakes is well understood; however, there is no clear-cut explanation for deep earthquakes. Hence, the concept that will be used to predict earthquakes is applicable only to shallow earthquakes. Moreover, shallow earthquakes cause more loss and wreak more havoc than deep ones.







Now, the principal point is that shallow earthquakes have a definite relation with seismic activity and seismic waves, which are the waves that originate because of movements inside the Earth. A seismograph shows more activity during an actual earthquake, and the Richter-scale magnitude of an earthquake is derived from the largest variation recorded on the seismograph during the earthquake's span. Monitoring centers throughout the globe keep recording the seismographs of their corresponding zones, and these recordings easily tell us when an earthquake occurred. The monitoring centers are placed after an analysis of tectonics: the Earth's outer shell is divided into several tectonic plates, and shallow earthquakes are related to collisions and other interactions among these plates.

The recordings also give us information about various parameters of the geographical area to which they pertain. You must have heard about danger zones, in terms of the probability of an earthquake occurring; countries and states are divided into seismic zones. Some zones have a higher risk of seeing an earthquake than others, and some zones are also likely to see more powerful earthquakes than others. This zonal distribution turns out to be very useful during planning. Such zones are drawn after analyzing the seismic activity over a long period and the tectonics of the place; for example, places closer to the boundary of two tectonic plates are at a higher risk.

So the question is, can't we make better use of seismographs and put them to better things than just planning zones? Well, seismographs may turn out to be a great boon for mankind. Seismograph records are built by measuring the strength of seismic waves, and they are analyzed across various parameters. They are recorded at all times, and most places in the world will have a large database of seismograph records by now. There are two suggestions for using these records to predict future earthquakes: a statistics-based philosophy and a data-mining-based philosophy. The statistics-based philosophy is the conventional one. The seismograph records of all the years till now are analyzed and the values of various parameters are calculated. The values recorded during earthquakes are given higher weights while compiling an area-based formula that can be used for prediction, and the formula applied to the current values tells us whether we are nearing an earthquake (a rough sketch of such a weighted formula follows the list below). The disadvantages of this approach are:

1. Future values may be similar to past values just by chance and hence may turn out to be false predictors in the end.

2. The approach might predict an upcoming earthquake, but not well in advance, so the authorities may not get the time required to warn people.

3. Every statistical approach has its own inherent limitations.

4. If there were heavy variations in parameter values during past earthquakes, then the formula for that area would be very fragile.

5. The process used to compute the formula is based on knowledge of a subject that is not fully understood, so the approach cannot be perfect.
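Before moving on, here is the rough sketch of the weighted formula promised above. The parameters (peak amplitude, dominant frequency, event count per window) and the weighting scheme are entirely hypothetical; this only illustrates the flavor of the statistics-based philosophy, not a real prediction model.

```python
import numpy as np

def compile_area_formula(windows, labels, quake_weight=5.0):
    """Compile an area-specific reference vector from historical seismograph
    windows.  `windows` is an (n, k) array of k hypothetical parameters per
    window; `labels` marks windows recorded shortly before a known quake.
    Pre-quake windows get a higher weight in the average."""
    windows = np.asarray(windows, dtype=float)
    weights = np.where(np.asarray(labels, dtype=bool), quake_weight, 1.0)
    reference = np.average(windows, axis=0, weights=weights)
    spread = windows.std(axis=0) + 1e-9          # avoid division by zero
    return reference, spread

def risk_score(current_window, reference, spread):
    """Distance of the current parameter vector from the weighted reference;
    a small distance means the present looks like the (quake-leaning) past."""
    z = (np.asarray(current_window, dtype=float) - reference) / spread
    return float(np.sqrt((z ** 2).mean()))

# toy usage with made-up numbers: columns are amplitude, frequency, event count
hist = [[0.2, 3.1, 4], [0.3, 2.9, 5], [1.8, 7.5, 22], [0.25, 3.0, 4]]
pre_quake = [False, False, True, False]
ref, spread = compile_area_formula(hist, pre_quake)
print(risk_score([1.6, 7.0, 20], ref, spread))   # low score: resembles the pre-quake profile
```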

The second approach, though, is the one that should interest us the most. It is based on data mining. Data mining is a process in which tonnes and tonnes of existing data are analyzed by a program in an attempt to find hidden and potentially important information. This information may take the form of hidden relationships between different items, or anything else that holds a lot of value for the organization to which the data belongs. Data mining can only be done when you have tonnes and tonnes of data to mine. To give you an example, consider a departmental store. The store sees tonnes of visitors every day, and all the billing information gets stored in its databases. When a database grows large enough, it is combined with even older databases, and all the billing information stored to date is moved to a data warehouse (rather like an organization's data archive). Now suppose the store wants to find hidden information in this archive; since the archive is humongous, manual mining is not an option. It runs a data-mining tool on the data and finds that about 50 percent of the customers who bought bread of brand A also bought cheese of brand B. That is very valuable information in this context. The store may introduce an offer where a combo of brand-A bread and brand-B cheese is sold together. Since 50 percent of the customers already love this combination, a good share of those who haven't tried it yet will also feel the urge to try the new combo, and the store can reap huge profits. That is data mining for you.
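As a toy illustration of the store example, the following sketch (with invented item names) counts how often pairs of items appear on the same bill and prints rules of the "customers who bought X also bought Y" kind. Real data-mining tools, such as implementations of the Apriori algorithm, do this at a vastly larger scale.

```python
from itertools import combinations
from collections import Counter

# hypothetical billing records: one set of items per bill
bills = [
    {"bread_A", "cheese_B", "milk"},
    {"bread_A", "cheese_B"},
    {"bread_A", "butter"},
    {"bread_A", "cheese_B", "eggs"},
    {"milk", "eggs"},
]

item_counts = Counter()
pair_counts = Counter()
for bill in bills:
    item_counts.update(bill)
    pair_counts.update(combinations(sorted(bill), 2))

# report rules of the form "X -> Y" with their confidence
for (x, y), together in pair_counts.items():
    for antecedent, consequent in ((x, y), (y, x)):
        confidence = together / item_counts[antecedent]
        if confidence >= 0.5 and together >= 2:
            print(f"{antecedent} -> {consequent}: "
                  f"{confidence:.0%} of bills with {antecedent} also contain {consequent}")
```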

So, in the context of earthquakes, what does data mining have to offer us? Well, it can do wonders. What has long been suspected by seismograph experts is that the seismographs of most areas might show some specific behavior just before an earthquake. We don't know how long this behavior lasts or what sort of behavior it is, but we do know one thing: we have the seismic activity recorded, both as graphs and as numerical values, and we can assume that we have a significantly large seismograph database. By nature, seismographs produce a lot of data, since they are recorded continuously, and we have a sufficiently large number of data-mining tools for mining both graphical and numerical data. Hence, if there is any such behavior, data mining will find it and report it. In graphical mining, the tool may come up with some pattern that appears some time before earthquakes, or persists through the period before them; in numerical mining, it may come up with a set of values seen some time before an earthquake, or even with some averages. So, if there is a pattern in the seismographs and we have appropriate seismographic data for an area, an effective data-mining process will uncover this hidden behavior, and experts can use the information to formulate models. In fact, data-mining tools also provide the lower-level details behind their findings and help the experts build detailed models. A separate program may then monitor seismic activity against this model and report results to experts at all times. It does not matter whether the behavior is transient or prolonged; if there is a specific behavior, data mining will find it. The strength of data mining lies in the artificial intelligence that the various tools possess: they use neural networks, genetic algorithms, clustering algorithms and various other approaches to analyze the data across many dimensions and surface hidden information. A rough sketch of such a pre-event pattern search is given after the list below. But this approach, too, has a few drawbacks:

1. The behavior may not be very useful if it is exhibited only a few seconds before the earthquake.

2. Data-mining tools take a lot of time to mine information, so using a tool on the fly is not possible. One has to plan properly when the next mining session should be run and how much new data should be collected before running it.

3. Data mining may, at times, come up with many possible alternatives for explaining a particular piece of information. This is not the fault of the tool; it is the nature of the problem. Here, experts will have to use their knowledge to narrow down the alternatives and formulate the final model.
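And here is the pre-event pattern search referred to above, reduced to its crudest possible form: cut each seismograph trace into windows, summarize every window with a few hypothetical features, average the features of windows recorded shortly before known earthquakes into a template, and flag live windows that resemble that template. A real mining tool would do something far more sophisticated; every feature and threshold below is an assumption for illustration only.

```python
import numpy as np

def window_features(signal, width, step):
    """Cut a long seismograph trace into overlapping windows and reduce each
    window to a few simple numerical features (assumes the trace is at least
    `width` samples long)."""
    feats = []
    for start in range(0, len(signal) - width + 1, step):
        w = np.asarray(signal[start:start + width], dtype=float)
        feats.append([w.max() - w.min(),                 # peak-to-peak amplitude
                      np.abs(np.diff(w)).mean(),         # mean rate of change
                      (np.abs(w) > w.std()).mean()])     # fraction of 'busy' samples
    return np.array(feats)

def pre_quake_template(pre_quake_traces, width=200, step=50):
    """Average the features of windows recorded shortly before known quakes
    into a single template (a crude stand-in for a mined pattern)."""
    feats = np.vstack([window_features(t, width, step) for t in pre_quake_traces])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-9

def looks_pre_quake(live_trace, template, spread, width=200, step=50, tol=2.0):
    """Flag the live trace if any window falls within `tol` standard
    deviations of the pre-quake template on every feature."""
    feats = window_features(live_trace, width, step)
    z = np.abs(feats - template) / spread
    return bool((z.max(axis=1) < tol).any())
```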

So the best thing we can do is to combine the first approach with the second and build a combined model for predicting earthquakes. There is no doubt that a lot of capital and time will be spent, but just imagine the benefit it would have for mankind. Some research has already started in this field. A team from the Indian Institute of Technology (IIT) Hyderabad is working on a project in which several small sensors will be placed in the Himalayan belt and data mining will be used to predict earthquakes a day in advance. The sensors come from Japan, so Japanese teams are also part of the effort, and teams from other IITs will be contributing as well. The project is expected to get into full flow by 2015, and more research from other universities around the world is under way. We can only hope that this research comes up with encouraging results and gives us a model with which areas all over the world can find out whether an earthquake is approaching, and that too, well in advance. Just imagine the world then. That is what technology can do.








Saturday, 5 November 2011

The Future Of Secondary Storage: Bio-Storage

We've seen floppy drives, pen drives, Zip drives, SD cards, magnetic tapes, magnetic disks, optical storage and what not. All of them are different modes of permanent storage, all use different technologies, and they vary in the amount of data they are capable of holding. But guess what: the future storage medium could be a living being! Amazed?

Yes, the future storage medium may very likely be a living being. First, scientists from Keio University, Japan, and later a group of students from the Chinese University of Hong Kong (CUHK), have shown how data can be stored in bacteria; the data is actually stored in the DNA of the bacterium. The more encouraging stride was made by CUHK, who were able to store Einstein's famous equation E=mc² in a DNA strand of the E. coli bacterium.






The logic is very simple, although the implementation is somewhat tedious. DNA is made up of millions of pairs of the chemical bases adenine, guanine, cytosine and thymine, which pair up to form the strand. The sequence in which these bases occur gives every individual his or her own unique genetic make-up, and it is in this very property that the secret of storing data in DNA lies. A mapping scheme is used to map the bits of data onto the chemical bases that form the DNA; different mappings can be used, where one assigns different meanings to the nucleic acid bases. A reverse mapping scheme is then used to map the bases back into the original data for retrieval.
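Purely as an illustration, here is a minimal sketch of such a mapping, assuming the simplest possible scheme (two bits per base: A=00, C=01, G=10, T=11). The real work of synthesising and sequencing the strand, and any error correction a real system would need, is omitted.

```python
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Map a byte string onto a DNA base sequence, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Reverse mapping: read the bases back into bytes."""
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"E=mc^2")
print(strand)                      # CACCATTCCGTCCGATCCTGATAG (four bases per byte)
assert decode(strand) == b"E=mc^2"
```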

Now, the implementation is tedious because:

1. Mapping is time-consuming, and the equipment used to do the mapping will be both costly and complicated.

2. DNA strands are humongously long, and hence look-up times may get very high.

3. Bacteria are very delicate, and if the bacterial culture gets infected, the data may be lost forever.

4. Bacteria can infect human beings, so the user has to be very careful while working with such storage media.

5. Bacteria can mutate and hence change their DNA, resulting in the resident data becoming corrupt.

But the advantages of this scheme can very well outweigh the points mentioned above:

1. One can very easily make cheap copies of tonnes of data. Bacteria reproduce and make identical copies of themselves, so if we store data in the DNA of a bacterium and that bacterium reproduces, the new bacterium gets the same DNA and hence we get a new copy of the original data at minimal cost. Copying data is comparatively costly with existing storage media.

2. A shuffling-based encryption scheme is possible with this method, because one can use multiple mappings that yield the same data. Hence, instead of storing the data according to the standard mapping, it can be stored with a different mapping, and the mapping equipment can recover the actual data by un-shuffling it (a small sketch of this idea follows the list below).

3. Data stored in bacterial DNA cannot be hacked very easily, because the data is not directly accessible. The data sits in the DNA of bacteria residing in a culture, and the culture is connected to the equipment that holds the mappings, so this mapping equipment can serve as a kind of firewall. One can add a mechanism where the user has to supply a separate passkey to access the equipment, making the storage more secure than existing media.

4. The main benefit of this storage is that the capacity is humongous. CUHK showed that about 90 GB of data can be stored in 1 gram of E. coli; conventional secondary storage technologies are nowhere near bio-storage as far as memory density is concerned. In fact, the CUHK group believes that with a proper implementation one could store 900 TB of data in one gram of E. coli. Just imagine that!
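Continuing the earlier mapping sketch, the shuffling idea from point 2 above could look roughly like this: the bit-to-base table itself becomes the key. With only 24 possible permutations of four bases this particular toy is trivially breakable, so treat it as an illustration of the principle rather than a workable cipher.

```python
from itertools import permutations
import random

BASES = "ACGT"

def keyed_mapping(seed):
    """Derive one of the 24 possible bit-pair-to-base tables from a seed;
    the chosen permutation acts as the secret 'shuffling' key."""
    perm = random.Random(seed).choice(list(permutations(BASES)))
    return {f"{i:02b}": base for i, base in enumerate(perm)}

def encode_with_key(data: bytes, seed) -> str:
    table = keyed_mapping(seed)
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(table[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode_with_key(strand: str, seed) -> bytes:
    inverse = {base: bits for bits, base in keyed_mapping(seed).items()}
    bits = "".join(inverse[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

secret = encode_with_key(b"E=mc^2", seed=42)
assert decode_with_key(secret, seed=42) == b"E=mc^2"
# decoding with the wrong seed still yields bytes, but not the original message
```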





Currently, this technology has mainly been used to store copyright information in bacteria, which is a very small amount of information compared with what could be stored. But research in this field is moving very fast, primarily because conventional storage technologies are approaching their density limits. Some people believed that bacteria are too delicate to store data in and could never be as reliable as magnetic disks or optical storage, but some bacteria are extremely resistant. In fact, the bacterium Deinococcus radiodurans can even survive nuclear radiation; if we store data in it, the data may remain safe for thousands of years.




Although it is very difficult to predict the exact future of this technology, it certainly has a lot of potential. And if scientists take it a little more seriously, you never know what they might make of it.







Monday, 31 October 2011

The Human Brain: Your very own supercomputer

We are amazed when we hear about computers that have unbelievably large memories and processing speeds, but we fail to recognize that the best supercomputer was always within us: our brain. Yes, that's true. You must have heard of computers having thousands of terabytes (1 terabyte (TB) = 1024 gigabytes (GB)), but when we talk of the memory size of the human brain, we need an even bigger unit, the petabyte (1 petabyte (PB) = 1024 TB). In fact, a commonly cited estimate of human memory is 2.5 PB (2.5 * 1024 * 1024 = 2,621,440 GB). Wow, that's something. In terms of the analogy, this is of course the secondary memory, i.e. the memory in which we store permanent data, just as a computer does on its hard disks.

Actually, the brain is composed of around 100 billion neurons on average, all connected to form one interconnected network, with each neuron connecting to roughly 1000 others. These neurons respond to the neural impulses that the brain receives from the nervous system, which is itself controlled by the brain. In simple terms, it is safe to think of every single neuron as a tiny computer in itself: it has some processing capability, it has some memory, and it treats neural impulses much like a computer responds to commands. All of these neurons connect and coordinate to make up the overall storage we talked about in the first paragraph, while the processing is divided among each and every one of them.

How a neuron works is very complex, but the overall system is such that tasks are divided into portions and the neurons individually solve these portions. Who divides the task into portions? Another set of neurons. Hence, different neurons assume different roles as and when demanded: some do the division of work, some do small-scale computations and some add up the partial results to form the overall result. Every neuron is capable of taking on any of these functions, much like the processing units in supercomputers.

Supercomputers are composed of processor and memory arrays in such a way that processors may have separate memories they can use for calculations; in this way, any processor can be programmed to do anything, and the same goes for the neurons of the brain. The processing of data is more or less analogous. However, supercomputer processor units have nothing to do with permanent storage (secondary memory), which is organized as separate RAID/LVM arrays with their own controllers for access. The neuron, which can handle all these departments on its own, is therefore superior to a supercomputer's processing unit.

Now, the concept of primary memory is not really applicable to the brain, because the brain is always online. Logically, the brain treats short-term and long-term storage alike; it simply disposes of or retains the short-term storage as demanded. For example, suppose you make a big calculation that involves predicting sales on the basis of some market values. You start working with some predefined calculation logic and the market values: the logic is already there in your neurons, and the values are loaded temporarily. The neurons work out the calculations and give you the end results. While making the calculations you would have made some sub-calculations, such as carrying digits or adding several figures to get a single one. You will hardly ever refer back to the carries or the intermediate results, so after some time the brain flushes them, whereas the final figures are retained for longer because you will use them again. The logic is also retained, because it was already there and, having been used here, may be needed again. In short, the brain stores important things and discards the less important ones, where importance is determined by how frequently a thing is used. So we can assume that the neurons store all the information, both permanent and temporary, and that both are equally accessible, unlike in computers, where primary memory is more easily accessible than secondary memory.

Now, even if we assume that some neurons store permanent data and cannot participate in calculations, we still have tonnes of neurons that can. Even if only 1 percent of the total were available, that would be roughly 26,214 GB of memory for computations, analogous to the RAM of a supercomputer. Ever imagined a computer with that much RAM? Moreover, every neuron is capable of processing the data it holds and the data it receives from signals. If we push the analogy, the processing capability of the brain would be perhaps a hundred times that of the latest supercomputer from Cray. Supercomputers may have specialized architectures, separate floating-point units, or even AI-based task-allocation schemes, but no supercomputer can, at least for the next ten years, even think of matching the human brain. You may not be aware of this, so let me bring you a little closer.

When you see things, you are using the feed from a high-definition camera: your eyes. The camera is so good that the transfer rate is in gigabytes per second, and the same goes for your ears and other sensory organs. The brain keeps getting data from these "peripherals" and keeps processing it; it also stores the important impulses, a large share of them in subconscious memory. In short, even when we are doing nothing, the brain is working on terabytes of data. When we do some intensive calculation, we are using just a small portion of the available capability. It is popularly claimed that the average human being uses only a percent or two of the brain's capability, and that Einstein used around 3 to 4 percent; everyone knows what he did. So if, by using a mere 3 percent of such a large setup, one man was able to change the world forever, you can imagine what would happen if someone used all of it.

The brain is, in fact, a very complex computer. You must have heard of the theories about dreams: dreams are thought to be the brain's own creations, built by combining stored things from the past into new, meaningful ones. Trust me, no computer in the world is capable of doing that. So I just want you to speculate for a while on the amazing capabilities that this organ in our skull possesses. Yes, you are the owner of the best computer in the world.







Saturday, 22 October 2011

The memory bound on AI systems: The move towards self-awareness

Quite often these days, we hear about the memory explosion in the world of computers. This explosion refers to the increase in the amount of memory per unit size, and there has indeed been an unanticipated one. Recall the days of SDR SDRAM, when computers used to have 64/128/256/512 MB of RAM, and compare that with the present era of DDR SDRAM, where typical RAM sizes have gone up to 8 GB, 16 GB or 64 GB, and even more in specialized architectures. Storage has improved just as much, and there has also been an explosion in processing speed: compare the 686-class processors with the i-series and you will find a breathtaking improvement. That was the general scenario; AI systems usually use specialized architectures like those found in supercomputers, where primary memory is measured in terabytes, secondary storage in petabytes, and MIPS ratings are far higher than in general-purpose computers. But even after all these improvements, for generic artificially intelligent systems the explosion in processing speed may suffice, while the explosion in memory (both primary and secondary) has not been good enough.

The reason is that AI systems capable of learning may need tonnes of memory to remain stable and effective. Unlike conventional computing systems, AI systems organize their memory mainly in the form of neural networks (there is a whole variety of knowledge-representation structures used in AI systems, but we will concentrate on neural networks to keep the discussion simple). Whereas conventional computers have tree-like directory structures and file systems, AI systems form a densely connected network, which is far more exhaustive and far more effective for what AI systems have to do. Neural networks are an imitation of the human brain: a neural network is composed of nodes and connectors, just like the neurons and connections in our brains, and like the impulses transferred between neurons, the nodes of a neural network transfer signals (information) among themselves.

Now let us try to see how a neural network actually works. Look at this diagram:


This is the diagram of a basic neural network. Such a network has three layers of nodes: input nodes, hidden nodes and output nodes. Input nodes are passive, which means they neither hold any information nor manipulate the information that comes to them; they simply pass the data they receive (here, numerical variables) on to the connectors that leave them. For example, in the figure above, the first input node receives a single variable (X11) and passes it on to the four connectors that lead to four hidden nodes.


The hidden nodes (the internal nodes in the middle layer) as well as the output nodes are not passive. Every connector arriving at a hidden node multiplies the value it carries by a weight, which is just a number. For example, if a value of 10 is sent towards a hidden node and the weight on that connector is 0.7, the weighted value will be 0.7 * 10 = 7. So what reaches a hidden node is a set of weighted values. The hidden node sums them and applies a sigmoid function, which squashes the combined value into a single number between 0 and 1, so every hidden node gives an output lying between 0 and 1.

After that, the output nodes receive values from the hidden nodes. Output nodes have multiple input connectors but only a single output, so they combine the incoming values to reduce the number of outputs the network produces; hence they, too, manipulate the information they receive.

There can be multiple layers of input, hidden and output nodes. Input nodes connect either to more input nodes or to hidden nodes, whereas hidden nodes connect either to more hidden nodes or to the output nodes. In this way we get a fully interconnected neural network.

So this is how a neural network holds information: the input nodes accept the raw information and the output nodes present the results of applying the knowledge. The weights are the most important part, because it is the weights alone that determine how good the results will be. The overall problem of having effective knowledge therefore comes down to fine calibration of the weights.
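As a concrete illustration of the forward pass just described, here is a minimal sketch with three input nodes, four hidden nodes and one output node. The weights are picked arbitrarily and are purely for illustration.

```python
import numpy as np

def sigmoid(x):
    """Squashes any weighted sum into the (0, 1) range, as described above."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(inputs, w_hidden, w_output):
    """One pass through a tiny network: the input nodes just pass their values
    on, each hidden node applies the sigmoid to its weighted sum, and the
    output node combines the hidden values into a single number."""
    x = np.asarray(inputs, dtype=float)
    hidden = sigmoid(w_hidden @ x)     # one row of weights per hidden node
    return sigmoid(w_output @ hidden)  # single output node

# 3 input nodes, 4 hidden nodes, 1 output node; weights chosen arbitrarily
w_hidden = np.array([[0.7, -0.2, 0.1],
                     [0.3,  0.8, -0.5],
                     [-0.6, 0.4, 0.9],
                     [0.2, -0.1, 0.3]])
w_output = np.array([0.5, -0.3, 0.8, 0.1])
print(forward([1.0, 0.5, -1.0], w_hidden, w_output))
```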

Now, coming back to the original problem: AI systems are of two types. The first type has an existing set of neural networks, and no new networks are added during operation. Expert systems fall under this category: they are AI systems specialized for a particular task, just like human experts. In such systems the amount of knowledge the AI system will use is already known; the knowledge is structured in the form of neural networks, and as the system starts working it uses this knowledge to solve problems. As it works, it keeps improving its knowledge by adjusting the values of the original weights. If the system learns that it failed on a few occasions because of some faulty weight, it can recalibrate that weight on the basis of its findings. These systems need only a limited amount of primary memory and storage to function.
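The re-calibration described above can be illustrated, very loosely, with an error-driven correction: after a failure, the output weights are nudged in proportion to how much each hidden node contributed to the wrong answer. This is only a toy stand-in for whatever update rule a real expert system would use; all numbers are made up.

```python
import numpy as np

def recalibrate(w_output, hidden_values, predicted, expected, rate=0.1):
    """Nudge the output weights after a failure: each weight is adjusted in
    proportion to how active its hidden node was when the error occurred."""
    error = expected - predicted
    return w_output + rate * error * np.asarray(hidden_values)

w = np.array([0.5, -0.3, 0.8, 0.1])
hidden = np.array([0.9, 0.2, 0.6, 0.4])   # outputs of the hidden nodes
w = recalibrate(w, hidden, predicted=0.35, expected=1.0)
print(w)   # weights fed by the most active hidden nodes move up the most
```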

The other class of systems is different. These AI systems are capable of much more than the previous class: they can also recalibrate the weights of existing neural networks, but in addition they can generate new neural networks and expand the existing ones. For example, take a humanoid robot. The robot knows only a few things at the beginning; as it starts operating, it learns new things. The amount of knowledge required by a humanoid to function is so large that it is never possible to incorporate it all at the very beginning, so humanoids start functioning with a minimal amount of knowledge and are equipped to learn new things on their own. Now suppose the humanoid comes across an entirely new task. As it learns how to do it, it builds a new neural network based on the knowledge it gathers; hence, as it learns new things, it keeps generating new neural networks. The humanoid may also extend an existing neural network when it learns a new variant of something it already knows. Say it knows how to cook plain rice and has just learned how to cook fried rice: it will add some new nodes to its existing network so that the network becomes more versatile.

It is for these systems that memory limits restrict functioning. At the outset, we do not know how a humanoid will learn things or at what pace, and even if we gave it the largest memory chips available, that would not suffice. The problem is that humanoids have to mimic humans, and we human beings have what is, for practical purposes, an unlimited amount of memory. Our brains are so powerful and our memory capacity so vast that no humanoid can even think of matching it, at least for the next decade or so. Although we use only a limited proportion of our brain, we can be confident we will never run out of memory; that is not the case for our humanoid. It has to build new neural networks and expand the ones it already possesses, so as it starts learning it has to learn a lot of things and retain most of what it learns. The humanoids built to date have tended to learn at exponential rates until they either had to shut down for lack of memory or had learnt what they set out to learn.

All research humanoids start with minimal knowledge and, as they interact with the world, they learn new things. The problem is that the algorithms are not very good at telling them when to stop. As a result they learn a great deal, keep generating more and more networks and keep expanding the existing ones; though they begin learning at a good rate, they eventually always fall short of memory. This happens because we human beings know our limits, but the humanoids don't. They fall short of both primary memory and permanent storage. As they expand their neural networks, they have to retain the existing networks in working memory and also keep using the updated networks if they are to continue doing the job they are learning, so the amount of network that needs to be held in memory crosses the available threshold. Moreover, humanoids are inherently multitasking, and therefore have to keep multiple neural networks in memory while solving problems.

There have been a few solutions to the problem of limited primary memory: modified algorithms can help the humanoid decide what portion of a neural network to keep in working memory. But even then, we will eventually reach a bound.

The second problem is that of permanent storage. As humanoids keep learning new things, they have to store the enlarged and newly created neural networks so that the acquired knowledge is available in the future. Hence, with time, the amount of permanent information held by every humanoid keeps increasing. Imagine how much knowledge a humanoid would be holding.

Let us try to get a feel for the magnitude of information we are talking about. If a humanoid has to learn to make coffee, it needs knowledge of how to see things and how to distinguish coffee and its supporting ingredients from the other objects in the world, and it also needs the procedure for making coffee itself. Recognizing objects in the world is a big problem in its own right. Learning by example is the usual approach for objects: if the humanoid has seen a coffee jar, it stores it in terms of the height, weight and other visual and physical aspects of the jar, and if the next jar is a bit different, it adds that information and refines its existing class of coffee jars. So, with time, the humanoid refines its object classes by adding more information to them and defines new procedures for the new things it learns. These things are stored in different forms: classes as sets of facts, procedures as sequences of steps, both of which are either entirely new representations or variations of the neural networks we discussed. Irrespective of the form in which this information is stored, the amount of memory needed is humongous; and mind you, the amount of memory we are talking about here runs to thousands of petabytes if the humanoid is to learn and retain most of what it encounters.

So, is it possible to put that much memory into a robot that looks like a human? Not at present, that's for sure. A modified client-server architecture could be used to regularly transfer tonnes of information from the humanoid to some remote storage where that much memory is available. Of course, given current network bandwidths, a single transfer would take a considerable amount of time, but we have no other option for now. The problem arises when the humanoid has to perform some action and knows that it has the knowledge to do it. If the neural network, or the portion of it needed to solve the problem, is in local storage (storage within the humanoid), that is fine; otherwise it will have to access the remote repository where it stores all the knowledge it gathers. In the latter case, imagine how long it would have to wait before all the needed information became available and it could start acting.

So where is the way out? Conventional methods, and modifications of them, do not seem to offer any viable solution to this problem. But a solution does exist: incorporating self-awareness into AI systems.

As the term suggests, self-awareness means that the humanoid or AI system becomes aware of its own existence and of its own limits. Obviously the system knows what memory and processing capabilities it has, but the emphasis here is on being aware of how much it is capable of learning, just as we human beings are. Such a humanoid will still start off learning at an exponential rate: as it encounters new problems, it gathers more and more knowledge through its interaction with the world. But because it knows about itself, it keeps deleting obsolete and temporary knowledge over time and learns only a portion of what it would have learnt in the previous case. The learning-by-example method would still have it classify coffee jars, which is an effective means of learning, but now that it is self-aware it records only the few aspects it considers necessary, keeping its memory limits in mind. This is analogous to a student who, while reading a chapter, notes the more important points and emphasizes them. Likewise, the self-aware humanoid grasps and stores only the important aspects; later, if it fails while attempting to solve the problem, it tries to pick up the other things it believes it missed in the first attempt.

Hence, the system that previously gained a lot of knowledge on the first attempt, and was sure it would be able to solve the problem when it met it again, now takes a small gamble and gains only partial knowledge. As the humanoid fails again and again, it keeps improving its knowledge base, and when the success rate eventually goes above a threshold it knows that it has gained enough knowledge and stops adding more. This is the essence of self-awareness: the system should know when to stop, which is why the threshold values must be chosen very carefully. The robot thus begins to learn the way human beings do; every human being tries a thing two or three times before starting to succeed, and that is how the humanoid now works. Another aspect is that, with time, the humanoid becomes aware of what its skills are and can guarantee some success in those domains. It keeps refining its knowledge base by adding new knowledge and discarding unused and obsolete knowledge. Effectiveness is somewhat reduced, because a problem solved long ago might take a long time to solve again after the corresponding neural network has been deleted and the problem has to be worked out from scratch, but the system needs far less memory than before. A toy sketch of such a thresholded learning loop is given below.
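Here is that toy sketch. Everything in it is invented (the success probabilities, the memory budget, the notion of "units of knowledge"); it only illustrates the shape of the stopping rule described above, not any published algorithm.

```python
import random

def learn_until_good_enough(success_threshold=0.8, memory_budget=500,
                            window=20, seed=0):
    """Toy self-aware learning loop: each failed attempt adds a little
    knowledge (and uses a little memory); the agent stops adding once its
    recent success rate crosses the threshold, and caps its knowledge when
    the memory budget is reached (standing in for pruning)."""
    rng = random.Random(seed)
    knowledge = 0                 # stand-in for 'amount of stored network'
    recent = []                   # outcomes of the last few attempts
    for attempt in range(1, 1000):
        skill = min(0.95, knowledge / 200)        # more knowledge -> higher odds
        success = rng.random() < skill
        recent = (recent + [success])[-window:]
        if not success:
            knowledge += 5                        # learn from the failure
        if knowledge > memory_budget:
            knowledge = memory_budget             # prune obsolete knowledge
        rate = sum(recent) / len(recent)
        if len(recent) == window and rate >= success_threshold:
            return attempt, knowledge, rate       # knows it has learnt enough
    return attempt, knowledge, rate

print(learn_until_good_enough())
```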

This concept cannot be used where the success rate is critical, but it can be used in humanoids that mimic the life of a regular human being in a training phase. Even after becoming self-aware, the system will still need some help from further technical advances, because even with these mechanisms the amount of permanent information needed would be difficult to fit into a machine the size of a human being.

In the end, I have to tell you that this post is by no means exhaustive; it is just a small snippet of a very big research area. A fully exhaustive treatment would have taken at least 200 pages, which is why this is a seriously scaled-down version. I just wanted to share these enticing insights with you and have you share that exhilarating imagination with me. That was the sole purpose behind this post.

Thanks for your patience.





Thursday, 20 October 2011

Building a smarter planet(The quest for information)

Every one of us is surrounded by a pool of information-emitting entities. Perhaps this is the first time you have heard that term, but in reality such entities have been around ever since the inception of this planet. Consider our bodies, or the body of any living being. On the whole, we don't seem to emit any information apart from the regular biochemical excretions and verbal or non-verbal communication. But, surprisingly, that very body emits an enormous amount of information all the time, and the EEG and ECG are classic examples of how to capture that information and put it to some useful purpose.









So, now that you have a brief idea of what this post is about, let's get to the main point directly. Information, as we see it, is some useful knowledge, but right there lies the flaw in the definition we follow: what is considered useless today can become a very useful bit of information tomorrow. The EEG and ECG, for example, would have appeared to be complete nonsense to the doctors of the medieval era. Our definition of information thus almost always prevents us from seeing the real picture and makes us miss out on potential, as-yet-untouched aspects. Let's get to a real example now.

Every computer is composed of components like the RAM, the motherboard (MoBo), the chipset, processors, buses, cooling units, HDDs and so on, all of which add up to form the computer as a whole. There are two types of signals involved in these devices: the digital signals the devices use to communicate with one another via the bus, and the electrical supply used to run the individual components. A lot of emphasis has been laid on how the buses should be organised and how the overall architecture should be designed, all of it aimed at making the digital signals travel faster and more reliably. Most of the innovation therefore went into improving the buses and communication interfaces, because these are the things that shape a computer's speed and response time, and there has indeed been tremendous improvement in these respects: device interfaces progressed from ATA/IDE to SATA, and bus specifications moved from SCSI to USB to the upcoming LightPeak. But the SMPS (switched-mode power supply), the component that supplies electricity to all the devices, hasn't seen much improvement, and as of now there is little hope that it will.

Why, one may ask? Well, once the SMPS reached a stage where it seemed to be doing what it was intended to do, people assumed it needed no further improvement. The only changes added later were to make it comply with the latest bus and communication-interface specifications, and such tweaks to voltage and current ratings do not constitute a breakthrough. Yet there could indeed have been a breakthrough improvement that we missed out on.

Every time your computer breaks down, there is either some component or some particular sub-component (a resistor, a capacitor and so on) that needs to be replaced. This happens when an incompatible device is connected, when a faulty device is connected, when some jumper setting goes wrong, or even when there is an internal surge. The reason these components or sub-components blow up is that some part received more electricity than it needed, and this excess often flows in through the supply wires of the SMPS. The SMPS is based on fixed logic: it simply knows the pre-specified voltage or current to be passed through each wire, and the transformers and cut-out mechanisms inside it ensure that, whatever the external voltage, the voltage it supplies matches the specification.

So where is the trick being missed? If all the voltages and currents are already within specification, why do components blow? The SMPS is responsible only for the power supplied to the MoBo and peripherals; after that, the MoBo distributes power to the bus and the internal circuitry. The reason currents exceed their limits at times is either that non-compliant components are connected, or that a particular device is or becomes faulty and passes more than it should. The SMPS is unaware of which devices are actually connected, whereas the MoBo can get a sound idea of what each device is. Now, if the SMPS and the MoBo were configured to exchange a minimal amount of information, the MoBo could use a low-power signal (driven by CMOS battery power) to discover the internal configuration before the actual boot-up, simply asking the individual components for their interface-related information. By the time the system is ready to boot, the MoBo would already have this information along with its own specifications, and even a simple piece of logic inside the SMPS could then use it to work out the voltages and currents to be delivered through every outlet cable.

So what's the real deal? If the SMPS has detailed knowledge of what has to be delivered, it can either adjust its internal circuitry to provide exactly that, or simply cut off to prevent damage to a component. Compare this with the previous situation: the old SMPS knew only how to supply fixed voltages and currents across its wires, whereas the new SMPS would be aware of the overall computer configuration and could change its internal settings to ensure that the voltages it supplies never blow away any component. The previously static SMPS with very limited knowledge becomes a smart SMPS that knows a lot about the computer system and can adapt accordingly. With just a little knowledge of how the system is configured, the SMPS and MoBo together could ensure that the computer never breaks down; it is simply a matter of harnessing some information, and harnessing it correctly. The computer BIOS does get updated automatically when the configuration changes, but that update happens during boot-up, so if any device is wrongly configured or any non-compliant device is connected, it can blow right there. The method suggested here would act like a system diagnosis before the machine even starts and would prevent any faulty configuration from running. A static computer system would thus become a dynamic one that adjusts itself to the different hardware connected to it. A hypothetical sketch of such a pre-boot negotiation follows.
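To make the idea concrete, here is the hypothetical pre-boot negotiation sketched as code. Nothing here corresponds to a real SMPS or BIOS interface; the device names, rails and limits are invented purely to illustrate the kind of check being proposed.

```python
# Hypothetical pre-boot negotiation between motherboard and power supply.
# Device names, rails and limits are invented for illustration only.

DECLARED = {                     # what each device reports over the low-power query
    "cpu":   {"rail": "12V", "amps": 9.0},
    "gpu":   {"rail": "12V", "amps": 12.5},
    "ssd":   {"rail": "5V",  "amps": 1.2},
    "fans":  {"rail": "12V", "amps": 0.8},
}

PSU_LIMITS = {"12V": 25.0, "5V": 20.0, "3.3V": 15.0}   # amps per rail

def plan_power(declared, limits):
    """Sum the declared draw per rail; refuse to boot if any rail would exceed
    the supply's limit, otherwise return the per-rail settings to apply."""
    per_rail = {rail: 0.0 for rail in limits}
    for name, need in declared.items():
        if need["rail"] not in per_rail:
            return None, f"unknown rail requested by {name}"
        per_rail[need["rail"]] += need["amps"]
    for rail, amps in per_rail.items():
        if amps > limits[rail]:
            return None, f"{rail} rail overloaded ({amps:.1f} A > {limits[rail]:.1f} A)"
    return per_rail, "ok to boot"

settings, verdict = plan_power(DECLARED, PSU_LIMITS)
print(verdict, settings)
```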

There is no doubt that costs will go up with the addition of this extra logic, but wouldn't a user be willing to pay a little extra for a computer that is about as infallible as it can get? Even this computer can fail if there is a problem with the initial logic or the SMPS itself, but the individual components and, more importantly, the data would stay safe. In fact, a few IBM laptops already have a BIOS pre-setting feature that runs solely on CMOS battery power, though the idea suggested here goes much further.

So we have had an example of how information that was always there, but always unattended, can be used to make an "invincible" computer system. In fact, this was just one of several ideas. Some universities, and the R&D departments of organizations like IBM, have already come up with a whole list of such things that they are working on. Some of these are:

1. Tracking every piece of medicine as it goes from manufacturing units to inventories to supply chains and finally to the stores. In this way, information about a medicine's lifetime can be used to counter adulteration and the repackaging of old medicines.


2. Collecting information for a newborn baby (EEG and ECG patterns, breathing rate, temperature variations, movements, growth and other faint signals emitted by the body) and combining it with information from his or her DNA to assess the potential for future diseases or abnormalities.

3. Making the electricity supply of a metropolis smarter by having every grid segment and every transformer keep a local computer informed about its current state. If any grid segment or transformer crosses its limits, or senses that it is about to, it can either shut down to prevent a total breakdown or ask the computer to update the configuration by balancing the load. All such local computers connect to a central power-distribution network that may be regulated by humans or by another, more powerful computer. In this way the systems remain up most of the time and potential breakdowns can be prevented. These computers need not be complete computers; they can be minimized, specialized versions of a full-fledged machine.


This is just one list, but in reality we can extract information from everything we come across. Of course, the implications of the use to which that information is put are very important, but if we start looking at the world from an entirely different perspective, most of our problems can be solved. It's just a matter of "thinking differently".


Monday, 17 October 2011

Artificial Intelligence: The Unforeseen Consequences

The simplest definition of artificial intelligence, or AI, is that it is a science that aims to make computers behave and act the way human beings do, and it is this very definition that has attracted scientists and engineers from around the world to work in this domain. Ever since its inception in the 1950s, when the first thoughts of developing such systems were conceived, AI has been a fascinating field of study, one considered very different from the others because of the very approach followed in modelling AI systems. AI systems differ from normal ones in that they approach a solution the way we human beings do, whereas conventional computing systems approach it in a rigid, procedural way. Whereas conventional systems can solve only the problems they were coded to solve, recently developed AI systems can generate theorems and prove them. It is this aspect of AI systems that has earned them a separate place in the world of computer science.

To most readers, AI systems primarily mean robots, since that is what has always been highlighted, but there is a lot more to AI than robots. The umbrella of AI covers expert systems, theorem provers, artificially intelligent assembly lines, knowledge-base systems and much more. Although these systems have varying architectures and very different characteristics, one thing ties them together: their ability to learn from their mistakes. AI systems are programmed to find out whether an attempt at something resulted in success or failure, and are further designed to learn from their failures and use this knowledge in future attempts at the same problem. A real-life example was when the IBM computer Deep Blue, programmed to play chess, beat the then world chess champion Garry Kasparov in 1997. Deep Blue had lost earlier matches, including games against players familiar with Kasparov's moves, but it gradually learned which moves were favorable and which were not, and it used this knowledge to beat Kasparov in the actual match-up. It is this very trait that has made designing AI systems both difficult and challenging at the same time.






Computer scientists may argue with the next point I am going to make, but it is something that has always concerned ethical thinkers and other people from a science background. Although AI promises to do a whole lot of good for the human race, its massive-scale implementation also brings a risk. On the one hand, AI systems can help us by managing knowledge for us, exploring new scientific concepts, assisting us in our day-to-day jobs and a whole host of other things; on the other hand, they pose a threat to our own existence. As articles by Hubert Dreyfus and John Sutton of the University of California, Berkeley point out, the rate at which the capabilities of AI systems are increasing can be dreadful. According to them, we are not very far from the day when AI systems will become better than human beings at performing almost any task. We already have AI systems that perform not only more efficiently but also more effectively than human beings in various fields, currently limited to analytical reasoning, concept exploration, logical inference, optimization and concept proving. At this point the list may seem restricted and may not bother many people, but the next generation of AI systems, designed for particular domains, will expand it in a very big way. In the near future we are going to see systems capable of programming other systems on the basis of purely structured logic, systems able to replace doctors in a few critical surgeries where doctors haven't been very successful, and systems able to carry out space exploration on their own. In fact, such systems have already been attempted, but they were assisted by human beings at some point. One might ask why such systems were not developed earlier, when they were first thought feasible. The answer is that certain hardware characteristics proved to be the bottleneck: these systems need very high processing power to support run-time reasoning, decision-making and logic design, a very large memory to support the massive amounts of information they must process, and large storage so that they can retain whatever they have learnt. Until a few years ago, the available processing power and memory were nowhere near what such systems actually require, but with the arrival of multi-core processors and recent breakthroughs in memory technology, both the processing power and the memory available per unit of chip space have gone up, and as a result we are finally beginning to see such systems come into action.



With such systems coming into action, we can expect them to be deployed in the field three to four years from now, and, going by past experience with similar trials, they will indeed outperform human beings in the fields where they replace them. If that turns out to be the case, we are going to face the biggest problem we have faced to date: massive-scale unemployment. Managers, always hungry for more efficiency and effectiveness without many demands in return, will be the first to prefer such systems over human beings. They will get what they always wanted and will stay happy until the day they themselves are replaced by such systems on the orders of still higher-level managers. The whole hierarchy of the workflow would then consist of AI systems. This may seem a distant reality, but going by the predictions it may actually happen. The sales of organizations will indeed go up; companies will make profits higher than they ever expected, while governments struggle to cope with all-time-high unemployment figures. The nations that manage this surge by passing appropriate regulations will be the ones that eventually sustain themselves, while those that fail will be dragged into a state where the economy is at its peak but society is at an all-time low. The whole balance of such nations would be disrupted and their administration would descend into chaos: planners would be clueless, confronted with something they had never faced before, and leaders would be clueless, with no one to assist them in decision-making. In short, the whole world could head towards an irrecoverable disaster. Right now, when we haven't yet seen such systems, this may all seem a bit far-fetched, but then ask your grandma how she felt when she saw the television for the first time.



Friday, 8 July 2011

Microsoft Windows: Past, Present and Future

If there is one corporation in the world that has simply been ruling its arena without any real competition, it has to be the mighty Microsoft. Microsoft Windows is arguably the easiest-to-use operating system, and the sales of Windows copies over the past 18 or so years are an indication of that. But there is one flaw in Microsoft Windows that has been there since the first copy was released: the internal mechanism by which the Windows kernel passes control to individual applications is susceptible to attack. We have seen a host of viruses for Windows systems so far, but only a few of them have exploited this weakness. So the biggest surprise for Microsoft may be yet to come.












Whereas the number of viruses that have been developed for the Apple Macintosh is fairly small, the number of viruses, worms and Trojans developed for Windows is very high. The answer to why this is so is the same as the answer to why Microsoft Windows has been the most successful operating system to date: the mechanism by which the Windows kernel makes API calls explains both. It is this very mechanism that makes Windows both susceptible and, at the same time, easy to use and easy to program for. Some operating-system experts trace this anomaly to the fact that Windows borrowed heavily from the Apple Macintosh. It is often alleged that Bill Gates and Microsoft took the ideas behind the Macintosh to build Windows, and since the Macintosh was targeted at a specific platform while Windows wasn't, such an anomaly was bound to appear. Whereas the Apple platform ties the operating system to the hardware, Windows can be used on a variety of platforms.




Actually, Windows is based on Microsoft's previous OS, DOS. DOS was known for being compatible with all the major hardware platforms and was a console-based OS. Microsoft took the flexibility of DOS and combined it with graphical capabilities inspired by the Apple Macintosh. Though Apple's coupling of hardware and OS has many advantages, such as better performance, lower vulnerability and a better multimedia experience, its rigidity has always limited the number of users who choose Apple.



Having said that, even the Apple Macintosh has seen some families of viruses. It is just that the Apple family of viruses is targeted towards the firmware, whereas the Windows viruses have more to do with files and registries. The Apple Macintosh and Microsoft Windows both have their own positives and negatives.

On the other hand, the differences between Linux and Microsoft Windows are of a different nature than those between Microsoft Windows and the Apple Macintosh. In simple words, whereas Windows relies on its API calling mechanism, the Linux environment gives you a shell. The shell lets you configure your operating system environment as you like: you can customize the operating system and even replace the original system code with your own. In short, when you are using the shell in Linux, you can customize, reconfigure and use your operating system at the same time. Moreover, Linux supports many more file systems than Windows does. The Linux OS provides much more flexibility and much more security than Windows. The security part comes from Unix, the ancestor of Linux: the way in which files and directories are accessed in the Unix kernel greatly reduces the chance that a virus can cause any trouble (a small illustration of this permission model follows). But there have indeed been viruses targeted at the Linux environment as well.
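As a rough illustration of that last point, the snippet below (Python standard library only) prints the permission bits of a typical system directory and of the current user's home directory. The paths are assumptions about a typical Linux layout; the point is simply that an unprivileged process, which is what most malware runs as, cannot write into root-owned system directories.

# A quick illustration of the Unix permission model alluded to above:
# system directories are writable only by root, so a virus running as an
# ordinary user cannot modify the binaries installed there.
# The paths below are typical Linux locations and are assumptions.

import os
import stat

for path in ("/usr/bin", os.path.expanduser("~")):
    mode = os.stat(path).st_mode
    print(f"{path:20s} mode={stat.filemode(mode)} "
          f"writable by me: {os.access(path, os.W_OK)}")

# On a typical system, /usr/bin shows drwxr-xr-x with "writable by me: False"
# for a non-root user, while the home directory is writable.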





The last two paragraphs show that there have been viruses for all three operating systems, which means all of them have some weaknesses. One could then ask why the number of viruses developed for Windows simply outnumbers the number developed for the other two. The answer is that the more popular an OS is, the more viruses are targeted at it. Virus writers want their viruses to spread as far as they can, and this can only happen if the viruses are targeted to infect the most widely used operating system of the time. Before Windows came onto the scene, most viruses were developed for Apple. And since, till date, Windows has seen the largest user base, the largest number of utility applications and the largest number of software suites, it has been the favorite OS of virus writers.

Though Windows could have been blown away in its early days, it was saved by its team of security engineers and by the upsurge in anti-virus culture. The most fortunate thing for Microsoft was that the anti-virus culture began to pick up just after the release of Windows. Peter Norton, John McAfee and Eugene Kaspersky came to Microsoft's rescue. Since Windows was the most popular and most attacked platform at the time, most antivirus corporations decided to provide antivirus solutions for Windows. Windows users, who had already spent so much on getting their copy of Windows, never really bothered about the extra amount they had to pay for an antivirus solution. The team at Microsoft also played intelligently by providing free security patches to Windows users over the Internet. Though leaving permanent vulnerabilities in the actual code and patching them as threats arrive is not a good idea, why would the user bother about that? The antivirus solutions and the security patches made users feel good while using Windows.



The other thing that went in favor of Windows was that it was ready to use. All the codecs were there from the moment you installed your copy. These codecs are not available in Linux out of the box; you have to add them yourself. The normal user therefore always preferred Windows over Linux, never mind that Windows charges you so much while Linux would have cost nothing. As the base of Windows users grew, all the major generic software corporations chose Windows as their target platform. The .EXE of Windows became the most popular executable format. Hence, with time, Windows became the favorite operating system of the masses, and its dominance only increased.


But then, what about the problems in Windows? The careless coding done initially had to manifest itself in the end. A user who treated his Windows installation as badly as he could had to format his computer every six months. With time, registry problems, invalid file associations and viruses would slow the machine down to such an extent that he would have no option but to reinstall Windows. This has always provoked frustrated Windows users to switch to some other operating system. In fact, in the USA, Microsoft began to eat into Apple's user share during the 1980s. But when users realized the problems with Microsoft Windows, they decided to switch back to Apple, even though an Apple PC would cost twice as much as one that runs Windows. Since most of the popular applications had their Apple Mac versions, users had no problem in switching to Apple, and they did. But the corporations who needed more flexibility than Apple would have provided them stuck with Microsoft.



So what is it that made Microsoft Windows survive, even with so many problems? It survived partly because of its luck, partly because of the corporations that depended heavily on it, and partly because of the users who were satisfied with what they had. The antivirus companies, in fact, had the biggest role to play in saving Microsoft's skin. But they had to do what they did: if they had not worked to save Windows, they would have had nothing to offer their customers. Hence, on one hand Windows depended on the antivirus companies for its survival, and on the other hand, the antivirus companies depended on the weaknesses of Windows for theirs. The delivery of security updates over the Internet also saved Windows. But the biggest thing that came to Microsoft's rescue was the lack of options that users had. Users were willing to spend money on their OS, but they were not willing to spend a lot of time setting it up. Though Linux provided all that Microsoft did, and at a lower cost, the fear of spending time setting it up made them stick with Microsoft. Moreover, all the major utility software was available for either Windows or Mac, and not Linux. This is how Windows survived the test of time.




Over time, users in the developed countries, where piracy is minimal and you have to spend a handsome amount to get your copy of Windows, began to feel that if they could spend some time learning Linux and setting it up, they could not only escape the weaknesses of Windows but also save their money. Hence, the open source culture became much stronger than it was at its inception. More and more users began to use Linux, and those who did so never looked back at Microsoft. For those applications where only a Windows executable was available, WINE came to their rescue. Slowly, the distribution channels of open source software widened and more and more users started to show interest in using such software. The open source community also began to build utility software that would work on both Linux and the commercial operating systems; the Mozilla Firefox web browser and open source suites like OpenOffice.org are some early examples of this. With time, the open source community got heavily into developing applications specifically for Linux, and as a result, a lot of utility software became available to Linux users. At present, Linux distributions as large as 5 GB are available. Such distributions have almost all the utilities that one could expect a normal user to use.






Companies that have seen Microsoft as their biggest rival began to fund GNU and assist it in acquiring codecs and in the free CD program. The number of Linux users has increased considerably in the last 10 years or so and is increasing at a very high pace. PCs that once had one or two Windows versions now have Windows and Linux in dual boot, or only Linux. With this surge in open source culture, players like Google and Oracle joined in. Lots of open source applications began to be developed and lots of users started contributing to the open source community. As it stands now, the open source culture is at its peak as far as developed nations are concerned. But in developing countries like India, the open source culture hasn't really taken off. The reason is that most copies of Windows there are pirated rather than original. Since users had pirated copies of Windows, they did not have to spend anything on it, and this made them stick to Windows, as there was no motivation that could have provoked them to change their OS. But even in these countries, the impact of the open source culture has been somewhat visible.

So what threat does the open source culture pose to Windows? Well, if GNU is able to acquire most of the paid codecs, then the open source OS will provide functionality equivalent to the Windows OS, for free. Plus, the most popular open source OS, Linux, is known to be much more secure, fast and flexible than Windows. Hence, once users feel like switching to Linux, they might never look back at Windows. Of course, it will take a long time for all the codecs to be acquired by GNU, but once this gets done, the road will get very difficult for Windows. Google is also about to release its new operating system, called Chrome.



The company has spent a lot of money on it and claims it to be a revolutionary type of OS. Experts believe that Chrome might have inbuilt support for many codecs and may also have inbuilt support for the Windows API calling mechanism (unlike Linux, where WINE is available separately and is not a part of the OS itself). Hence, Google's Chrome OS may just do what Linux couldn't do in all these years.

What can save Windows? It is quite obvious that Windows will stay in the arena for at least the next decade, but its user base has already begun to shrink. One obvious thing that could save Windows is removing the flaws by building the next Windows from scratch, but this does not seem to be a viable solution. Building the next version from scratch would make it pretty different from the ones we have already seen, and that may render the new versions incompatible with the older ones. Hence this solution would need a lot of insight and investment, and Microsoft as we know it would prefer continuing with its automatic-update philosophy. So what else can save Microsoft's skin? The answer follows.



If there is one thing that can come to Windows' rescue, it is the .NET platform. Microsoft has spent millions of dollars developing it and is keen to spend millions more on further improvements. The framework is a conglomerate of multiple programming environments: the programmer is allowed to code in his favorite language, but the overall system generates a .EXE and a set of DLLs, so that the application, no matter what language it was built in, can be executed on Windows. The framework realizes this by transforming the output of each language's compiler into an intermediate language called MSIL (Microsoft Intermediate Language); the final compilation step then compiles the MSIL code and produces the .EXE and the DLLs. If your target environment is Windows, it is much easier to code in .NET than in Java or other development frameworks. The platform provides more flexibility than any other, precisely because it has been optimized for one target system rather than being platform independent in order to be deployable on various environments. Moreover, integrating with MS Office, SQL Server and the other Microsoft services is seamless with .NET. The company has already invested a lot in maintaining and extending the framework, and the dividends have been evident: the platform gives tough competition to Java. The timing of introducing the platform was also tactical. Just when Java programming was surging, Microsoft chose to tempt Windows developers by introducing .NET, and with time Microsoft has kept demonstrating that it is the best programming platform if you are a Windows developer. All the latest technologies from Microsoft have been integrated into .NET, so a .NET programmer has much more flexibility than a Java programmer when writing code for Windows. As a result, Windows, which was already popular, may become even more popular because of the quality of applications you can produce for it. Currently, .NET and Java are the most widely used development platforms, and as long as this stays true, Windows will remain in the race. (A toy sketch of the compile-to-one-intermediate-language idea follows.)
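To make that pipeline concrete without pretending to be real .NET, here is a minimal Python sketch of the general idea: several language front-ends lower their source into one shared intermediate representation, and a single back-end consumes it. Every name in the sketch (ToyIL, front_end_infix, front_end_lisp, back_end) is hypothetical and invented for illustration; the real CLR, the MSIL instruction set and the JIT are far richer.

# Toy model of the "many languages, one intermediate language" idea.
from typing import List, Tuple

ToyIL = List[Tuple[str, int]]  # e.g. [("PUSH", 2), ("PUSH", 3), ("ADD", 0)]

def front_end_infix(src: str) -> ToyIL:
    """Front-end for a 'language' that writes sums as '2 + 3 + 4'."""
    nums = [int(tok) for tok in src.split("+")]
    il: ToyIL = [("PUSH", n) for n in nums]
    il += [("ADD", 0)] * (len(nums) - 1)
    return il

def front_end_lisp(src: str) -> ToyIL:
    """Front-end for a 'language' that writes sums as '(+ 2 3 4)'."""
    nums = [int(tok) for tok in src.strip("()").split()[1:]]
    il: ToyIL = [("PUSH", n) for n in nums]
    il += [("ADD", 0)] * (len(nums) - 1)
    return il

def back_end(il: ToyIL) -> int:
    """Single back-end: executes the shared IL, whichever front-end produced it."""
    stack: List[int] = []
    for op, arg in il:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

# Two different 'languages', one intermediate form, one back-end.
print(back_end(front_end_infix("2 + 3 + 4")))   # 9
print(back_end(front_end_lisp("(+ 2 3 4)")))    # 9

The point the sketch tries to make is the same one Microsoft makes for .NET: once every front-end targets the same intermediate form, a single well-optimized back-end serves all of them.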



Having talked about the competition Windows is bound to get from the open source community, Microsoft will always have the option of doing something to delay the inevitable. The company has a great marketing wing, and the strategies it uses can make its user base grow instead of shrinking. The company is known for tempting customers with great tie-ups: at times it has tied up with Japanese computer manufacturers and Intel to provide a PC that would cost half as much as an Apple PC, and at times it has tied up with server manufacturers to provide corporations with irresistible packages. Microsoft's support is also something that makes users feel that Microsoft can serve them better than anyone else. Though the computer companies would not be affected very much even if Windows gets phased out, almost all of them have reaped heavy profits by providing hardware specifically designed for Windows. These companies may also worry about what would happen if Apple begins to reclaim its share once Windows gets phased out: if users turn to Apple, no one would buy those simple computers. Hence, Microsoft is due to get full support from computer manufacturers so that Windows stays alive.



In the world of computers, nothing can be predicted with hundred percent surety. In the 1970s, the CEO of DEC (which was the leading computer manufacturer at that time, courtesy of its VAX computers) was asked why the company wasn't entering mainstream PC manufacturing. He replied, "What would a normal user do with a computer?", and said that 10,000 to 20,000 powerful computers were sufficient for the entire world. The company was blown away in the PC revolution of the 1980s, showing that in the world of computers, lack of insight can be fatal. Windows is the heartbeat of Microsoft; if Windows gets phased out, the company will be virtually nothing in comparison to what it is today. So, at the end of it, if Microsoft shows enough insight and acts accordingly, it can keep Windows what it is today, the most used OS. Otherwise, the company may become history, like DEC. The choice is theirs, and the ball, as of now, is in their court.




Tuesday, 21 June 2011

What to do with our old communication media ???


Technological advancements in the world of communications have always been welcomed with strict evaluations of how they will change the way something was accomplished previously and how well the new technologies will accomplish those goals. There are government agencies, non-profit groups, technical societies and manufacturer forums that take good care of this evaluation and ensure that only good advancements are accepted and bad ones are rejected. They take almost every aspect into consideration, but they forget (deliberately, in some cases) one big aspect: how the new technology will put the existing technology's infrastructure to use, meaning the wires, amplifiers, branch exchanges, trunk offices and circuit switching nodes in wired communications, and the signaling towers, repeaters and signal switching nodes in wireless communications. In some cases, a new technology's standard specification will include an annotation at the end saying that the existing infrastructure can be used with the new technology, but it will never reveal the difference in performance between using an entirely new infrastructure and using the older one, and hence manufacturers and service providers resort to building a new infrastructure from scratch.









But you cannot blame anyone for this, can you? These agencies strive to promote new technologies, and they tend to do it in a way that ensures new technologies do spring up. Moreover, new technologies, when given an entirely new infrastructure, have almost always offered greater benefits than they would have offered if they had been used by modifying the old infrastructure. Since the profits which manufacturers and service providers anticipated from a new infrastructure were always higher than what reusing the existing infrastructure could have yielded, they never really bothered about the old infrastructure. They just formulated policies stating that, as long as the old technology is in some use, they ought to maintain the infrastructure that was built to support it; but the policy never states what would be done once the old technology gets completely phased out.

There have been a handful of examples where the old infrastructure was put to excellent use. For example, the infrastructure for AMPS was reused to implement DAMPS, and the X.25 communication backbone, which was initially meant for conventional communication and not for carrying Internet payload, was later modified to serve as an efficient carrier of Internet payload (in fact, it is still used as a communication backbone in some regions). But the number of such examples falls well short of the number of examples where the existing infrastructure was left to rot or was removed to put the newer one in place.

It is not that the old communication infrastructure causes any considerable harm to the environment or adds to some other problem; it is just that we ignore the possibility of putting it to some good use. A set of standalone transmission towers cannot cause much harm to the environment, can they? (Of course, ignoring the possibility of a tower growing too old and falling on a person passing nearby.) Nor can bundles of copper wires cause much harm to the environment. So one might ask what the point is in caring about the old communication infrastructure. Why shouldn't we leave it as it is and move on to build one for the new technology? If the masters of the domain have been thinking the same way, why should we take a different stand? The experts of the domain, the manufacturers and the service providers have set this trend because they lacked insight. They have a lot of insight when it comes to implementing new technologies optimally and ensuring that the overall operation is robust and flexible, but they lack insight when it comes to doing something unanticipated with the existing infrastructure.

The DAMPS and X.25 backbone examples given above illustrate a different class of reuse: instances where the new technology could be implemented by making slight modifications to the existing infrastructure. Though such examples are motivating, they are very rare in the present context. If you are switching from conventional copper-cable telephony to optic-fiber-based communication, there is no way you could use the old infrastructure to implement the new technology; you have to build the new infrastructure from scratch. But what we forget here is that we can still do plenty of innovative things with the existing infrastructure. Conventional copper-cable systems are about to become obsolete; the infrastructure used by such communication was humongous, and the system was undoubtedly very reliable. Some wireless communication technologies like DAMPS and PHS are also close to becoming obsolete. So what can we do with them? A few proposals follow.


One proposal is that the existing copper-cable network be modified slightly, so that the cables which were once used to connect telephones to branch offices are instead used to connect utility meters to their respective billing offices. This will obviously mean that for houses that did not have a copper-cable connection before, the corporation will have to provide one. One might argue that the data transfer rate on copper media is limited to 56 kbps, but the utility meters would send and receive far less data than that (a rough back-of-the-envelope calculation follows). Instead of providing a separate system for meter readings, the existing system can be put to very effective use. This would eliminate the job of meter-reading personnel, and apart from that benefit, such a system could also detect thefts, which result in major losses to the power-generating corporations.
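As a quick sanity check on the bandwidth argument, the sketch below runs the arithmetic with assumed figures: a 64-byte reading sent every 15 minutes over a 56 kbps line. The reading size and frequency are illustrative assumptions, not real meter specifications.

# Back-of-the-envelope check: even a chatty utility meter uses a tiny
# fraction of a 56 kbps copper line. Figures below are assumptions.

LINE_RATE_BPS = 56_000            # classic analogue-modem ceiling on copper
READING_BYTES = 64                # assumed size of one reading plus framing
READINGS_PER_DAY = 96             # assumed one reading every 15 minutes

bytes_per_day = READING_BYTES * READINGS_PER_DAY
seconds_to_send = bytes_per_day * 8 / LINE_RATE_BPS
utilization = seconds_to_send / (24 * 3600)

print(f"Data per meter per day : {bytes_per_day} bytes")
print(f"Time on the wire       : {seconds_to_send:.2f} s per day")
print(f"Line utilization       : {utilization:.6%}")

# Roughly 6 KB per meter per day, under a second of transmission time,
# and about 0.001% utilization of the line.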

The second proposal is to use the existing copper-cable infrastructure as a signaling system for modern communication technologies. GSM, GPRS and most modern communication protocols need a separate signaling system for optimal performance. Although a portion of the overall infrastructure can be made to function as the signaling system, by using a physically separate signaling system we can minimize noise and ensure a higher-quality service with a reduced risk of network failures. Some GSM service providers who owned copper-cable communication systems before moving into cellular technology have indeed used this approach and have reaped a lot of benefit from doing so.

The third proposal is to provide super-secure communication by making use of any existing, obsolete transmission medium. Since the early days of network security, it has been recognized that one of the most secure communication arrangements is one in which two channels are used for the overall transmission: one channel carries the encrypted data and the other carries the security information. Since the amount of security information is small compared to the actual data, an obsolete low-data-rate channel can be used for it with relative ease (a minimal sketch of the idea follows). At the beginning of the Internet era, the proposal of having a separate channel for transmitting security-related information would have been considered a joke; but now, when we can use some obsolete transmission system to act as the second channel, the proposal certainly deserves consideration. It would undoubtedly mean some changes to the communication architecture and equipment, but by switching to the two-channel system intelligently, one can confine the changes to the terminal offices and the network gateways, which are easier to change than all the network nodes or intermediate switches. The best part is that the Internet protocols form a layered architecture, so we can make changes at one layer and leave the other layers unchanged. Though implementing this proposal could solve many of the network security issues we have been facing of late, wide implementation also poses practical challenges that can be solved only with an equal commitment from everybody involved.
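Here is a minimal, standard-library-only Python sketch of the two-channel idea. The "channels" are just return values, and the one-time-pad-plus-HMAC scheme is an illustrative assumption whose security rests entirely on the premise that an eavesdropper sees only one of the two channels; it is not a production protocol.

# Toy two-channel scheme: channel A carries the bulk ciphertext, channel B
# carries the keying material and an integrity tag. Illustrative only.

import hmac
import hashlib
import secrets

def sender(message: bytes):
    pad = secrets.token_bytes(len(message))                 # keying material
    ciphertext = bytes(m ^ p for m, p in zip(message, pad))
    tag = hmac.new(pad, ciphertext, hashlib.sha256).digest()
    channel_a = ciphertext                                  # main data channel
    channel_b = pad + tag                                   # low-rate security channel
    return channel_a, channel_b

def receiver(channel_a: bytes, channel_b: bytes) -> bytes:
    pad, tag = channel_b[:len(channel_a)], channel_b[len(channel_a):]
    expected = hmac.new(pad, channel_a, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return bytes(c ^ p for c, p in zip(channel_a, pad))

a, b = sender(b"meter reading: 4217 kWh")
print(receiver(a, b))   # b'meter reading: 4217 kWh'

In a real deployment the bulk ciphertext would travel over the modern, high-bandwidth link, while the short keying material and integrity tags would fit comfortably on the old low-rate medium.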

A fourth proposal is to hand the existing, obsolete infrastructure over to a cable TV company. Cable TV companies have been using coaxial cables, to reduce interference, and the WAN concept, to transmit cable signals to the homes of their subscribers. They chose this setup so that they could have overhead transmission rather than underground transmission. But if these companies were given some old digital wireless communication infrastructure, they could provide the same services with better quality and less clutter. Though customers would have to spend more on new terminal equipment, the return on their investment would be worth it. The cable companies, on the other hand, could provide extra services, as they would now have a high-bandwidth channel at their disposal. Moreover, the maintenance required in normal cable TV networks would not be required in the new system. Since such companies operate in very small areas, just one or two transmission towers may suffice in most cases. This system would be somewhat similar to the DTH system; though it would not provide as much clarity or as many services as DTH, the cost of a connection would be considerably less than the cost of a DTH connection.

Apart from these, many more interesting and innovative proposals have been made by various research groups, but questions of implementation feasibility have restricted some of them from spreading. Anyway, no matter how many proposals we make and how credible they are, the final decision always rests with the corporation which owns the dead network. It also depends on the willingness of the other corporation involved (the power-generating corporation in the first proposal, for example) to replace its old way of doing things with the new and more effective one. Hence, even if the corporation which owned the communication facility wants to reuse the existing infrastructure, it may never be able to do so simply because no one came asking for the services of the old network. Moreover, the kind of reuse a network lends itself to differs with the type of network in question: copper-cable communication will obviously be less useful than some digital wireless network like DAMPS.


In the end, it is your own network, so it is up to you whether you use it, sell it or let it rot in the open. The owners of the network infrastructure have the ultimate decision-making power. They should allow new ideas to come up from the experts, and they should check the credibility and practicality of implementing these ideas at scale. Making use of something that was rendered unusable: that is how we build a smarter planet.