Showing posts with label Cognitive Learning.

Tuesday, 12 June 2012

Using AI to Combat Cyber Crime

Artificial Intelligence has, for quite some time now, been used to combat credit card fraud. Data mining, which is in a way an application of AI, detects credit card fraud through various mechanisms. In the most general scenario, a pattern of the user's credit card usage is drawn from his transaction records, and every future transaction is added to this pattern only after conforming to it. Whenever a transaction, or a pair of transactions, violates the pattern, the system prompts the surveillance personnel to check in. It is then up to the personnel's discretion whether the transactions should be investigated or entered into the system and folded into the ever-changing pattern. In a more advanced form, the user's normal usage pattern is used along with the usage pattern of the whole group to which the user belongs. This group may be created on the basis of income, credit card category (gold, silver, etc.) or even the company to which the user belongs. This scheme is more robust and more resistant to single high-value transactions that may appear to drift away from the pattern but are actually genuine.
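The two-pattern idea above can be sketched roughly as follows. This is a minimal illustration, not a real fraud engine: the function name, the deviation test (a simple standard-deviation threshold) and the sample amounts are all assumptions made for the example.

```python
from statistics import mean, stdev

def is_anomalous(amount, user_history, group_history, k=3.0):
    """Flag a transaction as anomalous only if it deviates from BOTH
    the user's own pattern and the pattern of the user's peer group."""
    def deviates(history):
        if len(history) < 2:
            return False  # not enough data to form a pattern yet
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return amount != mu
        return abs(amount - mu) > k * sigma
    # Requiring both patterns to be violated makes the check robust to
    # single high-value but genuine purchases that fit the group's behaviour.
    return deviates(user_history) and deviates(group_history)

user = [40, 55, 60, 38, 47]          # this user's past transaction amounts
group = [40, 500, 90, 750, 60, 300]  # peers sometimes make large purchases

print(is_anomalous(480, user, group))   # large for the user, normal for the group
print(is_anomalous(5000, user, group))  # violates both patterns
```

Here a 480-unit purchase drifts far from the user's own history but not from the group's, so it is let through; the user-only check alone would have raised a false alarm.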

The above approaches have been quite effective in combating credit card fraud, and as a result, agencies all over the world have started looking at AI for combating other forms of electronic/cyber crime. They turned to AI because, given the humongous number of transactions, it is utterly impossible to employ humans to track movement over the internet. They need a machine to do that, and in fact a machine smart enough to match the wits of a human expert. The intelligence may either be embedded in the individual application servers, just like the spam filters used by mail servers, or implemented at the firewalls at the gateways. The advantage of embedding it into the individual servers is that logic related to the specific application can be included; e.g., a traffic pattern may be acceptable if destined for a mail server but not for some office application server. In fact, the best approach is to divide the intelligence between the two places: general intelligence is embedded at the firewalls, and application-specific intelligence is embedded in the individual servers.
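The split between gateway-level and server-level intelligence can be sketched like this. Everything here is illustrative: the rule functions, packet fields and blocklist are toy assumptions, not a real firewall API.

```python
# Two-tier filtering: general rules at the gateway, application-specific
# rules at each server, as described above.

BLOCKLIST = {"203.0.113.9"}  # hypothetical known-bad source address

def gateway_allows(packet):
    # General intelligence: checks that apply to ALL traffic.
    return packet["size"] < 65536 and packet["src"] not in BLOCKLIST

APP_RULES = {
    # Application-specific intelligence lives with each server.
    "mail":   lambda p: True,                     # bulk traffic is normal for mail
    "office": lambda p: p["rate_per_min"] < 100,  # the same pattern is suspicious here
}

def server_allows(app, packet):
    rule = APP_RULES.get(app, lambda p: False)  # default-deny unknown applications
    return rule(packet)

def accept(app, packet):
    # Traffic must pass both tiers to be accepted.
    return gateway_allows(packet) and server_allows(app, packet)

pkt = {"src": "198.51.100.4", "size": 900, "rate_per_min": 400}
print(accept("mail", pkt))    # acceptable for the mail server
print(accept("office", pkt))  # the same traffic rejected by the office server
```

Note how the identical packet stream is acceptable to the mail server but rejected by the office application server, which is exactly the situation the paragraph above uses to justify splitting the intelligence.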

The general model suggests that some traffic analysis technique be used, differing from network to network. Traffic could be analyzed at one or all levels: only the datagram traffic, only the IP-level traffic, or both. The traffic is then matched against a general traffic pattern, just like pattern matching in credit card fraud detection. At the firewalls, the overall traffic pattern is analyzed, and at the individual servers, the application-level and session-level traffic is analyzed. At the application level, once again two patterns could be used - a user pattern and a group pattern. At the firewalls, however, a single pattern has to be used. In fact, the system may keep different patterns for different days or different times instead of a single pattern, and use them accordingly. Like every cognitive learning mechanism, these patterns would also improve with time. The system would match the actual traffic against the stored pattern while also adapting to the patterns it analyzes. For example, if the system reported an anomaly and the network admin thinks it is normal traffic, the system would fold this into its traffic pattern model and improve itself. Hence, with time, the system becomes more and more effective.
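The feedback loop described above - per-time-slot patterns that absorb readings the admin declares normal - can be sketched as follows. The class, thresholds and traffic numbers are assumptions for illustration; a real system would track far richer features than a request rate.

```python
class TrafficModel:
    """Toy per-hour traffic baseline (requests per minute).
    When the admin marks a flagged reading as normal, the model folds it
    into the baseline, so the same false alarm fades over time."""

    def __init__(self, threshold=2.0):
        self.threshold = threshold
        self.stats = {}  # hour -> (count, mean, M2): Welford accumulators

    def _update(self, hour, x):
        n, mu, m2 = self.stats.get(hour, (0, 0.0, 0.0))
        n += 1
        delta = x - mu
        mu += delta / n
        m2 += delta * (x - mu)
        self.stats[hour] = (n, mu, m2)

    def check(self, hour, rate):
        """Return True if the reading looks anomalous for that hour."""
        n, mu, m2 = self.stats.get(hour, (0, 0.0, 0.0))
        if n < 5:
            self._update(hour, rate)  # still learning the baseline
            return False
        std = (m2 / (n - 1)) ** 0.5 or 1e-9
        if abs(rate - mu) > self.threshold * std:
            return True  # alert the admin; do NOT absorb automatically
        self._update(hour, rate)  # normal traffic refines the pattern
        return False

    def admin_says_normal(self, hour, rate):
        # Admin feedback: absorb the reading so the pattern adapts.
        self._update(hour, rate)

model = TrafficModel()
for r in [100, 102, 98, 101, 99]:   # quiet 9 a.m. baseline
    model.check(9, r)
print(model.check(9, 300))           # flagged as anomalous
model.admin_says_normal(9, 300)      # admin: this spike was legitimate
```

Keeping a separate accumulator per hour is the "different patterns for different times" idea; the `admin_says_normal` call is the learning step that makes the system more effective over time.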

Monday, 17 October 2011

Artificial Intelligence: The Unforeseen Consequences

The simplest definition of artificial intelligence, or AI, is that it is a science that tends to make computers behave and act the way human beings do, and it is this very definition that has attracted scientists and engineers from around the world to work in this domain. AI, ever since its inception in the 50s, when the first thoughts of developing such systems were conceived, has been a fascinating domain of study - one considered very different from the others because of the very approach followed to model AI systems. AI systems differ from conventional ones in that they approach a solution the way we as human beings do, whereas conventional computing systems approach a solution in a rather rigid and procedural way. Whereas conventional systems can solve only those problems they were coded to solve, the recently developed AI systems can generate theorems and prove them. It is this very aspect of AI systems that has earned them a separate place in the world of computer science.

To most readers, AI systems primarily mean robots, as this is what has always been highlighted. But there is a lot more to AI than just robots. The umbrella of AI covers expert systems, theorem provers, artificially intelligent assembly lines, knowledge base systems and a lot more. Although all these systems have varying architectures and very different characteristics, one thing ties them all together - their ability to learn from their mistakes. AI systems are programmed to find out whether an attempt at something resulted in success or failure, and they are further designed to learn from their failures and use this knowledge in future attempts at the same problem. A real-life example of this was when the IBM computer Deep Blue, programmed to play chess, beat the then world chess champion Garry Kasparov in 1997. Deep Blue had actually lost its previous matches, played against people who knew Kasparov's moves, but it gradually learnt which moves were favorable and which were not, and it used this knowledge to beat Kasparov in the actual match-up. It is this very trait that has made designing AI systems both difficult and, at the same time, challenging.


Computer scientists may argue with the next point I am going to put forward, but it is something that has always concerned ethical thinkers and some other people from the science background. Although AI promises to do a whole lot of good for the human race, it also brings a risk with its massive-scale implementation. On one hand, AI systems can help our race by managing knowledge for us, exploring new scientific concepts, assisting us in our day-to-day jobs and a whole host of other things. But on the other hand, they pose a threat to our own existence. As pointed out in the articles of Hubert Dreyfus and John Sutton of the University of California, Berkeley, the rate at which the capabilities of AI systems are increasing can be dreadful. According to them, we are not very far from the day when AI systems will become better than human beings at performing almost any task. We already have AI systems that perform not only more efficiently but even more effectively than human beings in various fields. Such fields are currently limited to analytical reasoning, concept exploration, logical inference, optimization and concept proving. Although at this point the list may seem a bit restricted and may not bother a lot of people, the next generation of AI systems, designed for particular domains, will expand it in a very big way. In the near future we are going to see systems capable of programming a system on the basis of pure structured logic, systems able to replace doctors in a few critical surgeries where doctors haven't been very successful, and systems able to do space exploration on their own. In fact, such systems have already been implemented, but they were assisted by human beings at some point. Now one might ask why such systems were not developed in the past, when they were already thought to be feasible.
The answer is that certain hardware characteristics of such systems proved to be the bottleneck. The systems mentioned above need very high processing power to support run-time reasoning, decision making and logic designing, and they also need a very large memory to support the massive amounts of information they have to process. Such systems also need large storage so that they can retain whatever they have learnt. Until a few years ago, the available processing power and memory were nowhere near what is actually required to build such systems. But now, with the advent of multi-core processors and recent breakthroughs in memory technology, both the processing power and the memory available per unit of chip space have gone up. As a result, we are finally able to see such systems coming into action.

Now, with such systems coming into action, we can expect them to be actually used in the field three to four years from now, and going by past experience with similar trials and the advent of similar systems, they will indeed outperform human beings in the fields in which they replace them. And if this turns out to be the case, we are going to face the biggest problem we have faced to date - massive-scale unemployment. Managers, always hungry for more efficiency and more effectiveness without many demands in return, will be the first to prefer such systems over human beings. They will get what they always wanted, and they will stay happy until the day they themselves are replaced by such systems on the orders of still higher-level managers. The whole hierarchy of the workflow will then consist of AI systems. This may seem a distant reality, but going by the predictions, it may actually happen. The sales of organizations will indeed go up. Companies will make profits higher than they had ever expected, but on the other hand governments will struggle to cope with all-time-high unemployment figures. The nations that cope with this surge by passing appropriate regulations will be the ones that eventually sustain themselves, and the ones that fail to do so will be dragged into a state where the economy is at its peak but society is at an all-time low. The whole balance of such nations will be disrupted, and the overall administration will descend into total chaos. Planners will be clueless, confronted with something they have never faced before, and leaders will be clueless because they will have no one to assist them in decision making. In short, the whole world may be led towards an irrecoverable disaster.
As of now, when we haven't seen such systems yet, this may all seem a bit far-fetched - but then, ask your grandma how she felt when she saw the television for the first time.