The Future of Computers - Artificial Intelligence


What is Artificial Intelligence?


The term “Artificial Intelligence” was coined in 1956 by John McCarthy at the Dartmouth Conference, where he defined it as the science and engineering of making intelligent machines.

Nowadays it is a branch of computer science that aims to make computers behave like humans. The field is often defined as the study and design of intelligent agents, where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.
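That definition is concrete enough to sketch in code. The following is a minimal, illustrative agent - the ThermostatAgent class and its percepts and actions are invented for this example, not taken from any library:

```python
# A minimal sketch of the perceive-then-act loop that defines an
# intelligent agent. The agent, its percepts (temperature readings)
# and its actions are hypothetical, invented for illustration.

class ThermostatAgent:
    """Keeps a room near a target temperature."""

    def __init__(self, target):
        self.target = target

    def act(self, percept):
        # Pick the action most likely to move the environment
        # toward the agent's goal (temperature near the target).
        if percept < self.target - 1:
            return "heat"
        if percept > self.target + 1:
            return "cool"
        return "idle"

agent = ThermostatAgent(target=21.0)
for reading in [17.5, 20.2, 23.8]:
    print(reading, "->", agent.act(reading))
```

Even this trivial agent has the two defining ingredients: it perceives its environment (the temperature reading) and chooses the action that maximizes its chance of success (keeping the room near the target).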

This new science was founded on the claim that a central property of humans, intelligence—the sapience of Homo Sapiens—can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings, issues which have been addressed by myth, fiction and philosophy since antiquity.

Artificial Intelligence includes programming computers to make decisions in real-life situations (for example, some “expert systems” help physicians diagnose diseases based on symptoms); programming computers to understand human languages (natural language processing); programming computers to play games such as chess and checkers (game playing); programming computers to hear, see and react to other sensory stimuli (robotics); and designing systems that mimic human intelligence by attempting to reproduce the types of physical connections between neurons in the human brain (neural networks).
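To make the “expert system” idea concrete, here is a toy rule-based diagnoser in Python. The symptoms and rules are invented purely for illustration and have no medical validity:

```python
# A toy rule-based expert system: knowledge is stored as explicit
# if-then rules mapping observed symptoms to a suggested diagnosis.
# All rules below are invented for illustration, not medical advice.

RULES = [
    ({"fever", "cough", "aches"}, "flu-like illness"),
    ({"sneezing", "runny nose"}, "common cold"),
    ({"rash", "itching"}, "allergic reaction"),
]

def diagnose(symptoms):
    """Return the diagnosis of the best-matching rule, if any."""
    best, best_overlap = "no match", 0
    for required, diagnosis in RULES:
        if required <= symptoms and len(required) > best_overlap:
            best, best_overlap = diagnosis, len(required)
    return best

print(diagnose({"fever", "cough", "aches"}))  # flu-like illness
print(diagnose({"sneezing", "runny nose"}))   # common cold
```

Real expert systems of this kind used thousands of such rules, plus mechanisms for chaining rules together and handling uncertainty.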



History of Artificial Intelligence


The Greek myth of Pygmalion is the story of a statue brought to life for the love of her sculptor. The Greek god Hephaestus' robot Talos guarded Crete from attackers, running the circumference of the island three times a day. The Greek Oracle at Delphi was, in a sense, history's first chatbot and expert system.


In the 3rd century BC, Chinese engineer Mo Ti created mechanical birds, dragons, and warriors. Technology was being used to transform myth into reality.

Much later, the royal courts of Enlightenment-era Europe were endlessly amused by mechanical ducks and humanoid figures crafted by clockmakers. It had long been possible to make machines that looked and moved in human-like ways - machines that could spook and awe an audience - but creating a model of the mind remained beyond reach.



However, writers and artists were not bound by the limits of science in exploring extra-human intelligence, and from the Jewish myth of the Golem and Mary Shelley's Frankenstein through to Forbidden Planet's Robby the Robot and 2001's HAL 9000, they gave us new - and troubling - versions of the manufactured humanoid.


In the 1600s, engineering and philosophy began a slow merger that continues today. From that union the first mechanical calculator was born, at a time when the world's philosophers were seeking to encode the laws of human thought into complex logical systems.

The mathematician Blaise Pascal created a mechanical calculator in 1642 (to help his father, a tax official, with his calculations). Another mathematician, Gottfried Wilhelm von Leibniz, improved Pascal's machine and made his own contribution to the philosophy of reasoning by proposing a calculus of thought.

Many of the leading thinkers of the 18th and 19th centuries were convinced that a formal reasoning system, based on a kind of mathematics, could encode all human thought and be used to solve every sort of problem. Thomas Jefferson, for example, was sure that such a system existed and only needed to be discovered. The idea still has currency - the history of recent artificial intelligence is replete with stories of systems that seek to "axiomatize" logic inside computers.

From 1800 on, the philosophy of reason picked up speed. George Boole proposed a system of "laws of thought" - Boolean logic - which uses the operators AND, OR and NOT to establish how ideas and objects relate to each other. Nowadays most Internet search engines use Boolean logic in their searches.
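A minimal sketch of Boolean retrieval as search engines apply it: each document is reduced to a set of words, and AND, OR and NOT become set operations. The tiny corpus and the query interface below are invented for the example:

```python
# Boolean retrieval in miniature: documents become sets of words,
# and the Boolean operators become set operations. The corpus and
# the search() parameters are invented for this demonstration.

docs = {
    "d1": "cats and dogs make good pets",
    "d2": "dogs chase cats",
    "d3": "birds sing in the morning",
}
index = {name: set(text.split()) for name, text in docs.items()}

def search(must=(), any_of=(), must_not=()):
    """AND all of `must`, OR across `any_of`, NOT any of `must_not`."""
    hits = []
    for name, words in index.items():
        if (all(t in words for t in must)
                and (not any_of or any(t in words for t in any_of))
                and not any(t in words for t in must_not)):
            hits.append(name)
    return hits

print(search(must=["cats"], must_not=["chase"]))  # ['d1']
print(search(any_of=["birds", "dogs"]))           # ['d1', 'd2', 'd3']
```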

Recent History and the Future of Artificial Intelligence


In the early part of the 20th century, multidisciplinary interests began to converge and engineers began to view brain synapses as mechanistic constructs. A new word, cybernetics, i.e., the study of communication and control in biological and mechanical systems, became part of our colloquial language. Claude Shannon pioneered a theory of information, explaining how information was created and how it might be encoded and compressed.
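Shannon's central quantity is entropy, H = -Σ p(x) log₂ p(x), the average number of bits per symbol needed to encode a message, which lower-bounds how far it can be losslessly compressed. A small sketch (the example strings are arbitrary):

```python
# Shannon entropy: average bits per symbol needed to encode a message,
# a lower bound on lossless compression of that message.

from collections import Counter
from math import log2

def entropy(message):
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(round(entropy("aaaa"), 3))  # 0.0 -> perfectly predictable
print(round(entropy("abab"), 3))  # 1.0 -> one bit per symbol
print(round(entropy("abcd"), 3))  # 2.0 -> two bits per symbol
```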

Enter the computer. Modern artificial intelligence (albeit not so named until later) was born in the first half of the 20th century, when the electronic computer came into being. The computer's memory was a purely symbolic landscape, and the perfect place to bring together the philosophy and the engineering of the last 2000 years. The pioneer of this synthesis was the British logician and computer scientist Alan Turing.

AI research is highly technical and specialized, deeply divided into subfields that often fail to communicate with each other. Subfields have grown up around particular institutions, the work of individual researchers, the solution of specific problems, longstanding differences of opinion about how AI should be done and the application of widely differing tools. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.


Natural-language processing would allow ordinary people with no knowledge of programming languages to interact with computers. So what does the future of computer technology look like after these developments? Through nanotechnology, computing devices are becoming progressively smaller and more powerful, and everyday devices with embedded technology and connectivity are becoming a reality.

This has led to the idea of pervasive computing, which aims to integrate software and hardware into all man-made and some natural products. It is predicted that almost any item - clothing, tools, appliances, cars, homes, coffee mugs, even the human body - will be embedded with chips that connect it to a vast network of other devices.


Hence, in the future, network technologies will be combined with wireless computing, voice recognition, Internet capability and artificial intelligence, with the aim of creating an environment in which the connectivity of devices is embedded so unobtrusively that it is not outwardly visible yet is always available. In this way, computer technology will saturate almost every facet of our lives. What seems like virtual reality at the moment will become human reality in the future of computer technology.
