Artificial Intelligence


Definition

AI is the intelligence exhibited by machines or software, and the branch of computer science that develops software and machines with intelligence. Although it is difficult to give a universally accepted definition of AI, researchers such as Hutter and Legg (2005) have proposed a working definition: ‘‘intelligence measures an agent’s ability to achieve goals in a wide range of environments”. This suggests the ability to adapt, learn and understand (Hutter & Legg, 2005). One obvious defining feature of this definition is intelligence itself, an important feature associated with technology, since these technologies are expected to think in a human-like way and to display human intelligence.
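
Legg and Hutter formalise this working definition in related work as a weighted sum of the agent’s expected performance over all computable environments. The sketch below paraphrases that formalisation; it is not quoted from the 2005 paper, and the notation follows common presentations of the measure.

```latex
% Universal intelligence of an agent (policy) \pi, paraphrased:
% V_\mu^\pi is the expected cumulative reward of \pi in environment \mu,
% E is the set of computable environments, and K is Kolmogorov complexity,
% so simpler environments receive exponentially more weight.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

Under this measure an agent scores well only by achieving goals across many environments, which is exactly the ‘‘wide range of environments” in the quoted definition.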

History of the technology

The underlying concepts of AI reach back many centuries, to Greek philosophy and mathematics. In the summer of 1956, AI was launched as a large-scale scientific endeavour at the Dartmouth Conference. This first wave of enthusiasm for AI undoubtedly encouraged radical predictions, such as the imminent arrival of machines with intelligence. The 1950s also saw the emergence of new experiments with machines based on neural network concepts. However, a publication co-authored by Marvin Minsky, a pioneer of AI who had been involved in developing the first computer neural networks, exposed the limitations of these networks and contributed to a downturn in the field. Interest revived when Japan launched an ambitious programme for the development of fifth-generation computers that placed strong emphasis on AI and on advances in expert systems, giving hope that AI would be able to realise its ambitious goals. Shortly after, another golden age of AI emerged, with radical predictions and renewed optimism, which unfortunately ended once it became apparent that the predictions would not be realised and that existing AI systems had real limitations. Although AI research over this period did not fully succeed, the field is currently experiencing renewed interest. AI has proven to be quite broad, ranging from a robotics perspective to an expert-systems perspective and a neuroscience-oriented perspective. Across these broad fields, the common denominator is the creation of machines that display human-like intelligence.

The field of AI can be separated into ‘‘strong AI’’ and ‘‘applied AI’’. The purpose of ‘‘strong AI’’ is to develop machine intelligence equal or superior to human intelligence. ‘‘Applied AI’’, sometimes referred to as ‘‘weak AI’’, puts the emphasis on applications of machine intelligence to specific tasks and services. ‘‘Applied AI’’ is commonly used in fields such as mobility, e-business and security technologies, which could have a strong effect on our culture and society. In today’s society, successful AI applications range from consumer electronics and mass-produced software to custom-built expert systems.

Timeline

Researchers have predicted a timeline for AI. The Korean Vision 2025 expects computer systems that display complex intelligence to be brought into practice within the next 15 years. Moreover, a ‘‘singularity’’ has been predicted within the next 30 years (Kurzweil, 2005). The term singularity implies that humans will be able to upload their entire minds to computers and become digitally immortal (Kurzweil, 2012, as cited in Prigg, 2014). The new era of singularity will enable our intelligence to become increasingly non-biological and incredibly powerful. Hence, the evolution towards singularity would allow us to transcend our biological limitations and amplify our creativity. In this phase of singularity, there will no longer be a clear distinction between human and machine, or between physical reality and virtual reality (Kurzweil, 2005).

There has also been rapid growth in the field of neuroscience, associated with new and advanced brain-imaging techniques for visualising processes in the human brain. These processes have been linked to certain types of human thinking, which is expected to provide a foundation for advances in AI.

Additionally, thanks to the continued validity of ‘‘Moore’s Law’’, it has been predicted that computers of ever-increasing power will be created, and nanotechnology offers scope for this trend to continue once silicon technology reaches its limits. Kurzweil (2001) estimates the human brain’s capacity at 100 billion neurons, with 1,000 connections per neuron, each performing 200 calculations per second. On this basis, a computer with the processing capacity of one human brain could be available for $1,000 by the year 2023, with the price falling to a single cent by 2037. Furthermore, Kurzweil (2001) predicts that by the year 2049 a computer with the brain capacity of the entire human race would be available for $1,000, with the price again eventually decreasing to a single cent.
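
Taken at face value, Kurzweil’s figures imply the following estimate of the brain’s raw processing capacity, which is the benchmark these price predictions refer to:

```latex
\underbrace{10^{11}}_{\text{neurons}}
\times
\underbrace{10^{3}}_{\text{connections/neuron}}
\times
\underbrace{200}_{\text{calculations/s per connection}}
= 2 \times 10^{16} \text{ calculations per second}
```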

Application areas/Examples of Artificial Intelligence (AI)

Applications of AI include software agents, artificial brains, artificial intelligence chips, control systems for robots and expert systems.

Software agents:

Software agents have been a particular focus of AI research since the late 1980s. At present, software agents are understood as programs that work independently (autonomous), react to changes in their environment (reactive) and communicate with other software agents. Simulations in science and computer games are among the most important uses of software agents today, and highly specialised agents are increasingly being put into practice. Furthermore, software agents may contribute to ambient intelligent environments by searching for information, assessing it and drawing conclusions that feed into adaptive decision making.
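
As a minimal illustration of these three properties, the hypothetical Python sketch below shows an agent that is autonomous (it decides by itself), reactive (it reads its environment) and communicative (it messages another agent). All class, method and field names are invented for illustration.

```python
# Minimal software-agent sketch: autonomous, reactive, communicative.

class Agent:
    def __init__(self, name):
        self.name = name
        self.inbox = []  # messages received from other agents

    def perceive(self, environment):
        """Reactive: read the current state of the environment."""
        return environment.get("temperature", 0)

    def decide(self, percept):
        """Autonomous: choose an action without outside control."""
        return "cool" if percept > 25 else "idle"

    def send(self, other, message):
        """Communicative: pass a message to another agent."""
        other.inbox.append((self.name, message))

# Usage: two agents sharing a simple environment.
env = {"temperature": 30}
a, b = Agent("a"), Agent("b")
a.send(b, a.decide(a.perceive(env)))  # a tells b what it decided
print(b.inbox)                        # [('a', 'cool')]
```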

Artificial brains:

The application area of artificial brains is closely linked to AI research, with the expectation that human-like functions will emerge from artificial brains. The higher functions of the brain are regarded as properties of its neurointeractivity: among neurons, among collections of neurons, and between the brain and its environment. In this context, artificial people would come to resemble humans provided that their intelligence develops within the human environment over a long period of time, through close relationships with humans. Taking this into consideration, artificial people would need social systems in order to develop their ethics and aesthetics.

Artificial Intelligence Chips:

Artificial intelligence chips, expected by the year 2025, should allow computers to understand human feelings and could possibly use electromagnetic information to read information from the brain (Korean Government, 2000). AI systems are also used to control robots, especially in environments shared with humans and other life forms.

Control system for robots:

Pires (2007) describes robot control systems as programmable electronic systems responsible for controlling and moving the robot manipulator while interacting with the environment and with advanced users. When an unexpected circumstance arises, for example a persistent error, the control system still allows the robot to be operated safely. A control-system application thus provides the core functionality of the robot.
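
At its core, a control system of the kind Pires describes is a feedback loop: measure the error between the target and the current state, apply a correction, and handle the case where the error persists. The Python sketch below shows a simple proportional controller for a single joint; the gain, tolerance and error handling are illustrative assumptions, not taken from Pires (2007).

```python
# Hedged sketch of a robot joint controller (proportional feedback).

def control_loop(target, position, kp=0.5, tolerance=1e-3, max_steps=100):
    """Drive `position` toward `target`; stop on a persistent error."""
    for _ in range(max_steps):
        error = target - position
        if abs(error) < tolerance:
            return position          # target reached within tolerance
        position += kp * error       # proportional correction
    # The persistent-error case Pires mentions: hand control back safely.
    raise RuntimeError("persistent error: target not reached")

print(control_loop(target=90.0, position=0.0))  # converges near 90.0
```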

Expert Systems:

There are different forms of expert systems within AI research, and they can be applied for medical or business purposes alongside other areas of work. Engelmore and Feigenbaum (1993) explain that “AI programs that achieve expert-level competence in solving problems in task areas by bringing to bear a body of knowledge about specific tasks are called knowledge-based or expert systems. Often, the term expert systems is reserved for programs whose knowledge base contains the knowledge used by human experts, in contrast to knowledge gathered from textbooks or non-experts. Taken together, they represent the most widespread type of AI application”. Malone (1993) explains how expert systems can be applied to areas of professional work, with examples in accounting including ‘‘tax, auditing managerial decision making, personal selection, financial modelling, decision making and accounting education and training purposes’’ (p. 1). Additionally, Mauno and Crina (2008) explain how medical expert systems are used to help clinicians with laboratory analysis, diagnosis, treatment protocols, and the teaching of medical students and residents.
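
The architecture described by Engelmore and Feigenbaum separates a knowledge base of expert rules from an inference engine that applies them to the facts of a case. The Python sketch below implements a toy forward-chaining engine; the medical-style rules are invented examples, not clinical knowledge.

```python
# Toy knowledge-based (expert) system: a rule base plus forward chaining.

RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"possible flu", "short of breath"}, "refer to clinician"),
]

def forward_chain(facts, rules):
    """Fire every rule whose conditions are all known, until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "short of breath"}, RULES))
# includes 'possible flu' and 'refer to clinician'
```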

Controversies/criticism

According to the European Technology Assessment Group (2006), ‘‘higher cognitive processes such as decision taking, learning and action still pose major challenges’’. Although there has been some advancement in cognitive systems and models, major barriers still need to be addressed before an artificial system can be created that approaches the cognitive capacities of humans (European Technology Assessment Group, 2006).

Taking this into careful consideration, traditional approaches to AI, which focus on the capability of digital computers to manipulate symbols, are not enough to achieve anything resembling true intelligence. One reason is that symbolic AI systems are designed and programmed rather than evolved or trained. Because they operate under fixed rules, symbolic AI systems are regarded as brittle and ineffective outside of their assigned domain (Arnall, 2003).

Specific ethical issues raised by the technology

A variety of ethical issues have been raised in relation to Artificial Intelligence (AI); these are discussed below.

Autonomy and rights:

The majority of ethical issues associated with AI revolve around software and robot autonomy. Some commentators doubt whether individuals actually want to cede control over their affairs to artificially intelligent software that may have its own legal rights. Even though some degree of autonomy has proven advantageous, absolute autonomy is regarded as a disadvantage. Legal systems, it seems, are not ready for highly autonomous systems, even in situations that are simple to visualise, such as the possession of personal information. Arnall (2003) adds that in the long term, where advanced applications of hard AI become conceivable, serious issues start to develop around robot take-over, machine rights and military conflict.

Robots overtaking humankind:

Many AI researchers have been drawn to the notion of creating an artificial intelligence superior to that of human beings. Hans Moravec, in his well-known book ‘‘Mind Children’’, argues that AI will inherit the position of humans as the dominant intelligence in the world (Moravec, 1988). Ray Kurzweil (2005) describes how this idea of super-intelligence also lies at the root of the singularity: technology that exceeds the capacity and processing power of the human brain should become technically feasible in the near future (Kurzweil, 2005). A range of approaches is accordingly seeking to advance research in AI and robotics.

Moral agency:

A considerable amount of the debate concerning AI addresses the extent to which machines can be regarded as moral agents. Himma (2009) defines moral agents as the class of beings whose behaviour is subject to moral requirements and who are held responsible for their behaviour. Wallach (2008) describes one benefit of machine moral agency: the range of contexts within which these systems can operate securely grows. Wallach (2008) argues that, using their capacity to assess and respond to moral challenges, such systems can be trusted to make correct decisions when faced with ethical issues. Harmful behaviours could therefore be prevented without human interference, and technologies like AI could be applied widely without worrying about undesirable consequences.

Machine ethics:

Anderson and Anderson (2007) put forward the view that a way of ensuring the ethically acceptable behaviour of intelligent machines toward humans is to integrate machine ethics into their systems. According to Moor (2006), the possibilities for machine ethics should be acknowledged because “Computer scientists and engineers must examine the possibilities for machine ethics because, knowingly or not, they’ve already engaged—or will soon engage—in some form of it” (p. 18).

Additionally, Moor (2006) describes two courses of action for creating such a system: the implicit ethical agent and the explicit ethical agent. An implicit ethical agent is programmed to behave ethically without an explicit representation of ethical principles; its behaviour is constrained by its designer, who is in turn guided by ethical principles. Nagenborg (2007) notes that such an agent has no influence on its own rules. An explicit ethical agent, by contrast, is able to calculate an appropriate action in ethical dilemmas by applying ethical principles. Nevertheless, Wallach and Allen (2005) argue that even though implicit and explicit agency have been put forward, a feasible system able to deal with dynamic contexts and different cultures has yet to be demonstrated.
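
Moor’s distinction can be made concrete in code. In the hypothetical Python sketch below, the implicit ethical agent’s ethics is fixed in its program by the designer, whereas the explicit ethical agent scores candidate actions against a represented set of principles; all rules, weights and action names are invented for illustration.

```python
# Implicit ethical agent: ethics hard-wired by the designer; the agent
# holds no representation of principles and cannot reason about them.
def implicit_agent(action):
    return "refuse" if action == "share_private_data" else action

# Explicit ethical agent: scores candidate actions against an explicit,
# weighted representation of ethical principles.
PRINCIPLES = {"avoid_harm": 2.0, "respect_privacy": 1.0}

def explicit_agent(candidates):
    """candidates maps an action name to (harm caused, privacy cost)."""
    def score(costs):
        harm, privacy = costs
        return -(harm * PRINCIPLES["avoid_harm"]
                 + privacy * PRINCIPLES["respect_privacy"])
    return max(candidates, key=lambda action: score(candidates[action]))

print(implicit_agent("share_private_data"))                          # refuse
print(explicit_agent({"warn_user": (0, 1), "stay_silent": (2, 0)}))  # warn_user
```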

Moral agency and cultural issues:

Artificial moral agents are expected to have an impact on the cultures in which they operate. If norms and values are (consciously) embedded in the ethical subroutines of an (implicit) artificial agent, questions arise as to which values should guide it: Nagenborg (2007) asks whether these should be specifically Western or Asian values, or universal values. If the agent makes moral choices on the basis of one set of values, those choices may be unacceptable in cultures that hold different values. An explicit moral agent, in contrast, could address this issue: its learning abilities would allow it to adjust to the local cultural backgrounds of its users (Nagenborg, 2007).

Gender biases:

Related to cultural biases in the design and functioning of AI are gender biases. Adam (2001) argues that the masculine, rationalistic, individualistic norms of traditional epistemology have been carried over wholesale into the design of AI systems. Adam (2001) argues further that these epistemic notions are bound up with ethics, reproducing inequalities between the sexes without being able to explain them. Feminist ethics, on the other hand, may offer a different way of dealing with the inequalities arising from the masculine, individualist ethics embodied in AI systems.

Responsibility gap:

AI machines are developed with learning capacities, autonomy and agency so that they can perform dynamic and complex tasks that previously only humans could do. As a consequence, the manufacturer, designer or operator of such a machine is in principle unable to foresee its future behaviour, and thus cannot be held ethically responsible or liable for it (Matthias, 2004). Moreover, Gill (2008) argues that machines cannot be held liable for what they have no control over. When a technology learns and behaves in unpredictable ways, the users, developers and designers of the system cannot exactly predict its behaviour and cannot be held solely responsible for it. This creates “a responsibility gap for traditional responsibility ascription and consequently, an ethical gap” (Gill, 2008; Matthias, 2004).

Autonomy:

Many researchers have questioned what level of autonomy artificial agents need in order to qualify as moral agents. Floridi and Sanders (2004) argue that a high level of autonomy is needed, while Adam (2005) argues that a database containing personal information cannot be eligible for the attribution of artificial agency. However, in the network approach suggested by Latour (1992, as cited in Bijker and Law, 1997), there is no need to attribute a high level of autonomy to artificial agents. Adam (2005) likewise argues that when morality is distributed throughout the network, moral activity can be achieved without a high degree of individual agency.

Shifting of responsibility:

Mowshowitz (2008) acknowledges that one of the downfalls of endowing technology with autonomy is that humans are ethically side-lined and relieved of responsibility: “The suggestion of being in grip of irresistible forces provides an excuse for rejecting responsibility for oneself and others, thus creating conditions for inappropriate or antisocial behaviour” (Mowshowitz, 2008, p. 271). Mowshowitz (2008) further claims that individuals fail to take responsibility and then blame advanced AI systems for what is essentially human failure, particularly when these systems display forms of agency. Additionally, blaming technology obscures the social relations of technology, which hinders understanding of the issues and results in an inability to deal with them (Mowshowitz, 2008).

Human Replacement:

Nilsson (2005) claims that AI implies that the majority of tasks performed by humans can be performed mechanically, which raises the question of whether intelligent machines will replace humans. The thought of this happening creates a distressing fear of losing jobs to machines and of being robbed of a purposeful life. Duffy (2008) notes that the fear of losing jobs dates back to the industrial revolution, when, for example, silk weavers feared unemployment. The fear that manual jobs will be replaced by technology has been an ongoing issue ever since; up until now, however, humans have not been replaced by machines but have instead moved into different roles in society.

Privacy:

Carew et al. (2008) argue that privacy issues will be worsened by modern applications of AI in a variety of settings, including e-learning and the internet, where intelligent agents profile individuals. Rosenberg (2008) notes that intelligent systems can also be used for data mining, for example the automatic interpretation of recordings and the cross-correlation of electronic files. Making such powerful mechanisms available could increase both real and anticipated assaults on individual privacy. Another major problem is the lack of informed consent (Carew et al., 2008): as individuals cannot be informed beforehand of what AI systems may reveal about them, they are unable to proactively protect their privacy.

Surveillance:

Surveillance is steadily becoming a fundamental part of societies in the developed world. Nagenborg (2007) points out that in some situations artificial agents are embodied as specialised robots that form an integral part of surveillance infrastructures, increasing the capacity to observe individuals. Beyond privacy, the free movement of people and information is put at risk by digital borders guarded by artificial agents (Nagenborg, 2007).

Attribution:

This document was developed by Vanita Patel on the basis of research undertaken in the ETICA project, www.eticaproject.eu.

References

Academic publications:

Adam, A. (2001). Computer ethics in a different voice. In: Information and Organization, Volume 11, Issue 4, October 2001, Pages 235-261, ISSN 1471-7727, DOI: 10.1016/S1471-7727(01)00006-9.

Adam, A. (2005). Delegating and Distributing Morality: Can We Inscribe Privacy Protection in a Machine?. In: Ethics and Inf. Technol. 7, 4 (Dec. 2005), 233-242. DOI= http://dx.doi.org/10.1007/s10676-006-0013-3

Anderson, M., & Anderson, S. L. (2007). Machine ethics: Creating an ethical intelligent agent. In: AI Magazine, 28(4), 15-26.

Carew, P. J., Stapleton, L., and Byrne, G. J. (2008). Implications of an ethic of privacy for human-centred systems engineering. In: AI Soc. 22, 3 (Jan. 2008), 385-403. DOI= http://dx.doi.org/10.1007/s00146-007-0149-7

Duffy, B.R. (2008). Fundamental Issues in Affective Intelligent Social Machines, The Open Artificial Intelligence Journal, pp.21-34 (14), ISSN: 1874-0618 Volume 2, 2008.

Floridi, L. and Sanders, J. W. (2004). On the Morality of Artificial Agents. In: Minds Mach. 14, 3 (Aug. 2004), 349-379. DOI= http://dx.doi.org/10.1023/B:MIND.0000035461.63578.

Gill, S. P. (2008). Socio-ethics of interaction with intelligent interactive technologies. AI Soc. 22, 3 (Jan. 2008), 283-300. DOI= http://dx.doi.org/10.1007/s00146-007-0145-y

Himma, K. E. (2009). Artificial agency, consciousness, and the criteria for moral agency: what properties must an artificial agent have to be a moral agent?. In: Ethics and Inf. Technol. 11, 1 (Mar. 2009), 19-29. DOI= http://dx.doi.org/10.1007/s10676-008-9167-5.

Hutter, M., Legg, S. (2005). A universal measure of intelligence for artificial agents. Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI), Edinburgh, July 30 to August 5, 2005. San Francisco: Morgan Kaufmann Publishers, pp. 1509-1510.

Kurzweil, R. (2005). The Singularity is Near. When humans transcend biology. New York: Viking.

Latour, B. (1992). Where are the Missing Masses? The Sociology of a Few Mundane Artifacts. In W. E. Bijker and J. Law, editors, Shaping Technology/Building Society: Studies in Sociotechnical Change, pages 225-258, MIT Press, Cambridge, MA and London, 1997.

Malone, D. (1993). Expert systems, artificial intelligence, and accounting. Journal of Education for Business, Vol. 68, Issue 4, Mar/Apr 1993. ISSN 0883-2323.

Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. In: Ethics and Inf. Technol. 6, 3 (Sep. 2004), 175-183. DOI= http://dx.doi.org/10.1007/s10676-004-3422-1

Mauno, V. & Crina, S. (2008). Medical Expert Systems. In: Current Bioinformatics, Vol. 3, No. 1, January 2008, pp. 56-65. Bentham Science Publishers.

McCulloch, W. and Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5:115-133.

Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), pp. 18-21.

Moravec, H. (1988). Mind Children. Cambridge Mass.: Harvard University Press.

Mowshowitz, A. (2008). Technology as excuse for questionable ethics. AI Soc. 22, 3 (Jan. 2008), 271-282. DOI= http://dx.doi.org/10.1007/s00146-007-0147-9.

Nagenborg, M. (2007). Artificial moral agents: an intercultural perspective.International Review of Information Ethics (7), 129-134.

Nilsson, N. J. (2005). Human-Level Artificial Intelligence? Be Serious! American Association for Artificial Intelligence, AI Magazine, December 22, 2005.

Pires, N. J. (2007). Industrial Robots Programming: Building Applications for the Factories of the Future. New York: Springer.

Rosenberg, R. S. (2008). The social impact of intelligent artefacts. AI Soc. 22, 3 (Jan. 2008), 367-383. DOI= http://dx.doi.org/10.1007/s00146-007-0148-8.

Wallach, W., & Allen, C. (2005). Android Ethics: Bottom-up and Top-down Approaches for Modeling Human Moral Faculties. Proceedings of the 2005 COGSCI workshop: Toward Social Mechanics of Android Science, pp. 149-159.

Wallach, W. (2008). Implementing moral decision making faculties in computers and   robots. AI Soc. 22, 4 (Mar. 2008), 463-475. DOI= http://dx.doi.org/10.1007/s00146-007-0093-6.

Governmental/regulatory sources:

European Technology Assessment Group. (2006). Technology Assessment on Converging Technologies. Retrieved January 22, 2014, from http://www.europarl.europa.eu/stoa/publications/studies/stoa183_en.pdf

Korean Government. (2000). Vision 2025 Taskforce – Korea’s long term plan for science and technology development. Retrieved on January 23, 2014, from http://www.inovasyon.org/pdf/Korea.Vision2025.p

Newspaper sources:

Prigg, M. (2014). Daily Mail. Retrieved January 30, 2014, from http://www.dailymail.co.uk/sciencetech/article-2548355/Google-sets-artificial-intelligence-ethics-board-curb-rise-robots.html

Web Sites/Other resources:

Arnall, A. H. (2003). Future Technologies, Today’s Choices. Nanotechnology, Artificial Intelligence and Robotics; A technical, political and institutional map of emerging technologies. Retrieved January 23, 2013, from http://www.greenpeace.org.uk/MultimediaFiles/Live/FullReport/5886.pdf

Engelmore, R. S. & Feigenbaum, E. (1993). Expert Systems and Artificial Intelligence. In Knowledge-based systems in Japan. Japanese Technology Evaluation Centre. Retrieved on 23 January, 2014, from http://www.wtec.org/loyola/kb/c1_s1.htm

Kurzweil, R. (2001). The law of accelerating returns. Retrieved from http://www.kurzweilai.net/articles/art0134.html?printable=1

Kurzweil, R. (2005). Singularity. Retrieved on 30 January, 2014, from http://www.singularity.com/aboutthebook.html
