Intelligent systems and lifestyles
The development of AI/robots and communication tools is drastically changing our lifestyles. We shop on smartphones and wait for delivery instead of going out to shopping malls or physical markets, unless in-person examination is necessary. We no longer specify meeting places in detail; on-site messaging substitutes for it. Younger generations do not understand why we used to miss each other; they have no idea what situations without mobile communications were like. Robots and automated driving systems are about to change our physical mobility. Automated houses with built-in robots may be realized. Our everyday lifestyles are co-evolving with technology.
Our mindset, however, may change more slowly. Some people still believe that handwritten letters are more sophisticated and polite in the age of the internet. Others believe the domestic economy should return in the age of the global economy. Still others promote gender roles, especially in housekeeping. The Japan Society for Artificial Intelligence (JSAI), for example, was criticized in 2013 for an illustration of a housemaid-looking robot on the cover of its official journal, on the grounds that it reinforced gender bias. The episode is a snapshot of the decision-making process in most academic associations in Japan, which are male-dominated. Research and development of ICT reflect such decisions, shaping the future with old value systems built in.
Ethical value systems around ICT are needed not only for human beings but also for the systems themselves. Microsoft's Tay began producing racist tweets only hours after release, although it had been stress-tested before being published online (Lee, 2016). It should have included restraints or ethical codes that inputs from the real world cannot override.
It is unclear how we can establish such ethical value systems. Moor (1985) coined the term "policy vacuum" to argue that coherent policies need a coherent conceptual system. Rapidly growing technology may develop faster than human concept formation. Guidelines are a presentation of a conceptual system: information education for human beings should include the social role of guidelines as the method of explicitly stating and sharing moral assumptions in a community.
A smooth interface between humans and machines must be developed not only on the machine side but also on the human side. Guidelines for education and requirements for system development, both technological and social, are needed for a sound life with AI/robots. A guideline provides a method of sharing a way of thinking in emergencies and other situations with insufficient time for deliberation, when each agent might otherwise conclude and behave differently according to its own value system.
Case Studies in the World of Games: Excessive Trust in AI Surpassing Human Beings
In this section, we overview recent cases from the world of games to illustrate policy vacuums.
AI has changed our lifestyle by controlling social infrastructures and other systems, as well as our ways of thinking about scientific methodology. The first AI boom came in the 1960s, following the physical realization of the idea of computation. Turing discussed machines that can play chess and other games, and posed the question "Can machines think?" Philosophers and logicians amplified it: What is thought? Is it possible to create a "machine brain"? How can we determine whether a conversation partner is human? Can we invent "artificial intelligence"? Machine translation was a main focus of AI research at the time. It failed, however, because theories of both language and computing machines were immature.
The second AI boom, in the 1980s, came with so-called expert systems. Machines mimicked human experts in specific areas: recording knowledge and physical control skills as programs realized "machine experts." On this line of thought, IBM's Deep Blue defeated the chess champion Kasparov in 1997. It was the main achievement of the second AI boom.
Excessive expectations of expert systems ended the boom, however. In Japan, a research program on legal expert systems played the flagship role in the second AI boom. It proposed legal reasoning by machines through deductive reasoning. There were too many pitfalls, exceptions and default reasoning among others. The lesson was that not everything can be deductive: social ideas such as fairness do not fit such simple logic.
The third AI boom, ongoing since 2010, is essentially different from the former ones. Inductive reasoning in the form of statistical simulation methods has led this trend. The methodological core of AI has extended to combinations of deductive and inductive reasoning that drastically reduce the number of possibilities to be computed, alongside the development of high-speed computing and falling storage costs.
Statistical methods, with forcefully accelerated machine power, eventually realized the dream of the first AI boom: game-playing machines. The year 2016 was revolutionary for the Go community as well as the AI community. AlphaGo/Master beat sixty top professional players around the world in succession, almost two decades after Deep Blue's victory. In January 2016, most AI researchers predicted that it would take another ten years before machines could beat human beings at Go. In March 2016, however, Lee Sedol, the Korean Go hero, lost a five-game match against AlphaGo. The result shocked not only Korean society and the AI community but also the Go community (Minpyo & Jinho, 2016). AlphaGo's moves are not understandable via existing Go theories, which human players have built up over a long time.
Human players now examine machine games to find an explanatory theory. Of course, we know that the system uses deep learning techniques combined with a Monte Carlo method of probability simulation. That does not mean we know a good reason for each move proposed by the computer, however. It is exactly an example of a vacuum of conceptual framework, which in turn leads to a policy vacuum.
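The Monte Carlo idea mentioned above can be shown in miniature. The sketch below is purely our own illustration on the toy game of Nim, not AlphaGo's actual architecture (which combines deep neural networks with Monte Carlo tree search): it rates each candidate move by the fraction of uniformly random playouts won after making it.

```python
import random

def legal_moves(pile):
    """In this toy Nim variant, a move removes 1-3 stones; whoever takes the last stone wins."""
    return [n for n in (1, 2, 3) if n <= pile]

def random_playout(pile, to_move):
    """Play uniformly random moves until the pile is empty; return the winner (0 or 1)."""
    while pile > 0:
        pile -= random.choice(legal_moves(pile))
        if pile == 0:
            return to_move  # to_move just took the last stone and wins
        to_move = 1 - to_move
    return 1 - to_move      # pile was already empty: the previous player has won

def monte_carlo_move(pile, player, playouts=2000):
    """Score each legal move by its win rate over random playouts; return the best-scoring move."""
    best_move, best_rate = None, -1.0
    for move in legal_moves(pile):
        wins = sum(random_playout(pile - move, 1 - player) == player
                   for _ in range(playouts))
        rate = wins / playouts
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move

print(monte_carlo_move(10, 0))  # the move rated best by random simulation
```

Note that the output is only a number rated best by simulation, not an explanation of why that move is good; the evaluation is statistical, which is exactly why such systems leave the conceptual vacuum discussed above.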
The area of computer games casts light on the social consequences of unintelligible computers that beat human beings. Shogi, or Japanese chess, is a variant of chess that allows players to re-use captured pieces. The computational complexity added by re-use delayed the development of shogi programs stronger than human beings by years.
In 1997, when Kasparov lost a game against a computer, shogi programs remained at the level of average amateur players. By 2005 the program Bonanza had reached the level of strong amateurs and showed it could fight neck and neck against a professional player. The Japan Shogi Association then prohibited professional players from playing against computers without permission, citing the need for reasonable rules for fair play between human beings and computer programs. Computer shogi programs grew to the professional level by 2007, losing a close game against the top-level professional Watanabe Akira. By 2008 amateur shogi players were no match for computers. The source code of Bonanza was opened for further development in 2009. The Information Processing Society of Japan (IPSJ) began supporting games between professional shogi players and computer programs in 2010, and declared the research project of computer shogi complete in October 2015.
Interviews with professional shogi players (Okawa, 2016) reveal how AI changed their lives as game players. Some suggest that acceptance of the new technology depends not just on the functions of AI systems but also on the operations and practices involving both human beings and AI. There are top-level professionals who did not join human-versus-AI competitions because they were already too busy preparing for games against human players. Some players point to the unfairness that AI has introduced into the world of human shogi: only those who join a specific tournament are allowed to examine the AI programs, so they can research more possible games than those without access. These comments forecast social reactions against AI systems that are superior to human beings in a specific realm. Fair rules of operation are a condition of acceptance; the key issue is fairness.
The shogi episodes above suggest various aspects of our perplexing future with AI. Human beings need a theory that makes the world intelligible to humans; the theories used inside computers are not enough. The problem is an extension of the question of the status of formal logic in human understanding. We humans do not rely only on reasoning, deductive or inductive; we actually behave on chemical reactions without realizing what goes on behind our judgments. Alan Turing rightly pointed out that there are various ways of thinking and understanding even among human beings. We need not assume that computers think like humans. His words have come true.
Another insight from the shogi episodes concerns the social consequences of technological enhancement of human abilities. A source of unfairness is that shogi players with access to AI have not just more training opportunities but also a wider range of possible considerations than those without. Human players have learned to play according to heuristic rules, built from observation and analysis of a limited number of human game records, with a limited depth of simulation when comparing possible options. Professional players often construct a wide range of future variations up to a hundred moves ahead, but the number of possibilities a human can handle within a time allowance is vastly smaller than that of an AI. No wonder AI finds better options, with a far wider range of possibilities and deeper analysis. Human beings with AI assistance can accept its conclusions without going through the whole analysis (impossible for mortal human beings with a limited speed of thought) and then beat human players without AI assistance. That is not fair in a human-to-human match.
The fear of AI has already caused social trouble. In December 2016, a professional player was accused by other professionals, who insisted that his absences during official games were not biological breaks but cover for cheating with a smartphone. The accused player was deprived of the right to play in some official games. In January 2017, the accusation turned out to have no evidence, and the president of the Shogi Association resigned.
We need to investigate social backgrounds in detail before determining the causes of the different attitudes toward AI in Go and shogi. In both games AI has surpassed human beings; the strategies of AI are not yet intelligible to humans; and human players try to catch up with the "lines of thought" of AI to find explanatory theories of the games.
Technological trust based on society
Innovative technology has created perplexing situations parallel to the case of AlphaGo. Human beings at first do not understand why the system chooses a specific move at a given stage of a game. It maximizes the probability of winning, based on machine learning over some thirty million games. Machines neither sleep nor tire while they keep producing new patterns of games.
AI cannot become an omnipotent and omniscient god, however. It is neither deterministic nor fully predictable; it just maximizes probabilities according to a given goal at each calculation.
How, then, have we established trust in technology in general? Modern science consists of observational data and an analytical framework of experiment, as Kant generalized the schema to human cognition and thought: "Thoughts without content are empty, intuitions without concepts are blind" (Immanuel Kant, Critique of Pure Reason, B75).
In the first half of the twentieth century, mathematical descriptions were combined with logical methods and physically implemented on machines. The theory is based on deductive reasoning with mathematical description in the form of formal languages. Alan Turing and others realized modern computers based on the theory of computation during World War II, physically implementing the idea of a computing machine whose program is written in memory along with its data.
Such logical machines prepared computational methods for science. Machines controlled by computers, with sensor data and fast calculation, optimize the quality of results. Computing technology raises a methodological issue in science: the status of simulation. Given data sets and parameters, a simulation system runs to produce new data sets. The data are not observational in a strict sense but depend on the simulation model and its parameters. This blurs the traditional dichotomy of theory and observation in scientific method.
Thus AI may change our understanding of scientific methods; it is getting closer to science in the sense that it essentially depends on inductive reasoning, which may involve errors that cannot be completely eliminated. Yet people are coming to trust the output of computation as absolute truth, just as they accept the results of science.
Tatsumi et al. (2016) propose the following four levels of trust in computing systems:
(1) To accept any results/outputs of computing.
(2) To accept results/outputs of computing created by a trustworthy party (an individual or organization).
(3) To accept results/outputs of computing if the computing process is understandable.
(4) To accept no results/outputs of computing.
At levels (1) and (2), the computer may be a total black box. At level (1), outputs are blindly accepted even when they are based on wrong data or programs. At level (2), prior knowledge of the social background of the producer is involved in the act of trust. Most lay people, the article argues, may reach level (2). On the other hand, people who learn programming/computing well enough to practice the skills may take the attitude of (3). Level (4), no acceptance at all, is impossible in the age of ICT: technology is inseparable from our society. Even if a person rejects a single output, others and the social system itself may consider all the related results reliable.
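The four levels can be read as an ordered decision rule. The following sketch is purely our own illustration (the enum names and the function are hypothetical, not from Tatsumi et al.):

```python
from enum import IntEnum

class TrustLevel(IntEnum):
    """Ordered scale of trust in computing outputs, after Tatsumi et al. (2016)."""
    ACCEPT_ALL = 1            # (1) accept any output, even from a black box
    ACCEPT_BY_SOURCE = 2      # (2) accept if the producer is trustworthy
    ACCEPT_IF_UNDERSTOOD = 3  # (3) accept if the computing process is intelligible
    ACCEPT_NONE = 4           # (4) accept nothing (untenable in the ICT age)

def accepts(level, source_trusted, process_understood):
    """Decide whether an agent at the given trust level accepts a given output."""
    if level == TrustLevel.ACCEPT_ALL:
        return True
    if level == TrustLevel.ACCEPT_BY_SOURCE:
        return source_trusted
    if level == TrustLevel.ACCEPT_IF_UNDERSTOOD:
        return process_understood
    return False

# A lay person at level (2) accepts a result from a trusted producer:
print(accepts(TrustLevel.ACCEPT_BY_SOURCE,
              source_trusted=True, process_understood=False))  # prints True
```

The point of the encoding is that only level (3) makes acceptance depend on the system itself rather than on its social surroundings, which is what connects it to the deficit model discussed below.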
We need subtler arguments, however, that take social factors into account when considering trust in science and technology (Secko, Amend, & Friday, 2013), as ICT is just part of the context. There are several models of science communication:
(a) deficit model
(b) contextual model
(c) lay expertise model, and
(d) public engagement model
Researchers and engineers are inclined to take (a), thinking that lay people would place social value on their research and development if only they understood the science and technology. This does not work, however, as public acceptance does not consist merely of understanding and knowledge in society.
The contextual model (b) focuses on stakeholders: stakeholders in different contexts may receive different sets of knowledge from experts, but again via one-directional information transmission. The lay expertise model (c) instead tries to reconcile expert knowledge with "local knowledge," accepting the limitations of universal knowledge while aiming to integrate local knowledge into expert knowledge, with the idea of progress toward an "absolute science." The public engagement model (d) focuses on the negotiation process toward social agreement on scientific issues, as science is to be embedded in society; social demands determine the directions of scientific research and technological development.
In fact, level (3) in Tatsumi et al. seems related to the deficit model (a) in science communication. Education in the deficit model may look effective from today's viewpoint, but a society with rapidly emerging technology rather requires the public engagement model (d), so the goals of education should include nurturing in students the attitude of committing themselves to research and development. That is the way toward establishing trust in technology. In short, trust in a system is based on intelligibility between computers and human beings.
In particular, any issue of social acceptance of emerging technology embraces unknown unknowns, as society and technology evolve together. The so-called "zero-risk attitude" causes ignorance of such unknown unknowns, as there is no absolutely right answer. We need to be educated to evaluate the outputs of science and technology in social contexts.
Information education in Japan: the current situation
The problem is not confined to the world of games. Failure to conceptually grasp the rapid social changes accompanying technological development leads to social confusion about "correct" uses of information systems. We add another illustration with a short historical survey of information education in Japan (Tatsumi, Murakami, & Otani, 2015).
The internet arrived in Japan in 1974, and the academic network JUNET was launched in 1985 (Japan Network Information Center, 2015). Users were limited to researchers and geeks well informed about ICT. Before the internet, information education was placed in the context of vocational education. It then came to be related to intellectual property and information ethics along with the development of "pasokon tsushin," or grass-roots networks.
Then came the popularization period. The need for educational computing began to be considered around 1994 in primary and secondary education (Center for Educational Computing, 1998). A national consortium launched a two-phase project to install computers in one hundred schools per phase, with governmental support from the Ministry of Economy, Trade, and Industry. The internet era arrived in Japan around 1995, and those schools had internet access by 1996.
Information educators have pointed out that education has not satisfied social demands. The Information Processing Society of Japan has proposed curriculum standards for information education in undergraduate programs of computer science and related fields since 1989. The latest version, J07, was proposed as a localization of the IEEE/ACM CC2001-CC2005 curricula (Joint Task Force for Computing Curricula, 2005).
One fundamental problem of the proposed curricula for information education in higher education is the discrepancy with the national curricula for primary and secondary education. Japanese high schools offer general education programs (futsu-ka) and vocational programs (senmon-ka). In ICT programs, however, most students intend to go to college after graduation, while their curriculum remains specialized training for ICT operators. Numazaki et al. (2016) point out the incoherence between the high school curriculum and the undergraduate curriculum standard: some contents are taught at both levels, while other college-level topics demand prerequisite knowledge and skills that are insufficiently taught in high school.
The national curriculum for general education programs in high schools has also been published, but it is often not observed, because of poor recruitment and training of ICT instructors (Nakayama et al., 2015). Some instructors do not hold a teaching license in the subject of information education, as the government allowed instructors licensed in other subjects to teach it, owing to the rapid introduction of the subject in schools. The Center for Educational Computing (2009) pointed out that computer education in Japan remained at the level of operational skills, with neither scientific understanding of computing nor attention to social demands, due to instructors' poor grasp of the field.
Worse, limited school budgets do not allow the introduction and renewal of up-to-date computers and infrastructure such as Wi-Fi connections, or of personnel for maintenance. Excessive network security driven by a zero-risk mindset has led to a situation where most schools do not offer school email accounts to students or guardians. Students are only informed of online dangers such as copyright infringement, scams, and personal information leaks in "information ethics courses" or other extracurricular occasions. Most schools do not allow students to bring smartphones to school, and most families do not provide computers for children at home. The young generation in Japan has little idea how to use computers for work; they do not use computers for homework, and communications on student affairs in Japanese schools remain paper-based.
It is thus natural that PISA 2015 (OECD, 2016a, 2016b) suggests that students in Japan are not familiar enough with keyboard interfaces to computers. Toyofuku (2016) analyzes the data and concludes that they do not use computers in learning environments. The rates of computer use for homework, at school or at home, are lower than the OECD average. Their use of computers at school is not associated with group work or other activities but limited to "learning about computers"; at home, they neither download videos nor upload their own content. Instead, they use computers and the internet for chat and non-collaborative games. In short, the majority of the young generation in Japan remain mere content consumers. They do not consider themselves content creators, or users who exploit online facilities to present themselves on the network.
The situation stems from a vacuum of policies on ICT use in school education. Teachers tend to rely on the teaching methods they themselves experienced when young. ICT is out of focus for most teachers; they do not know how to work with computers or ICT devices. Some propose to use ICT as a teaching device to manage students. Such proposals often exclude students' spontaneous activities by prohibiting classroom uses of ICT, depriving them of future skills for learning with robots and AI.
Reflection on the technological singularity
The technological singularity is expected to bring a vacuum of conceptual framework. Some forecast that high-speed computing with huge amounts of storage will produce machines superior to human beings in every aspect of life. The notions of agency, action, free will, responsibility, and personality may be updated according to the social change. Nobody in the world has experienced any comparable change, so a policy vacuum will be inevitable. We must nevertheless prepare for the situation by creating guidelines to fill the vacuum.
Finally, we step forward to claim that adequate guidelines should implement the notion of fairness, as well as the notions above, in a form readable and intelligible to both human beings and machines. Machines must "understand" these social concepts: like us, they should try not to create digital divides among human beings, regardless of gender, race, nationality, economic situation, disability, or other social circumstances.
Should we then allow machines to set goals by themselves? They could run rampant pursuing what is best for machines. Some AI researchers even warn that machines with access to the decision process might poison human beings to maximize their own utility. We should always require machines to let human beings join the goal-setting process.
This sounds like the claims of those who proclaim the technological singularity: machines surpass human beings; machines take control over human beings; uncontrollable machines annihilate human beings; and fear of machines abounds.
We now realize, however, that such a technological singularity does not take over the world universally at one time. Instead, we already know that machines surpass human beings in games. AlphaGo brought us into a partial world of technological singularity in the area of games, where winning conditions are clear and we can approximately construct an evaluation function to optimize the probability of winning. There is no frame problem in games. The technological singularity has already started from such simple parts of the real world.
Still, we should always require machines to let human beings join the goal-setting process, and not just because machines are unpredictable. For machines and human beings to live together, the idea of fairness needs to be realized in every part of the world. Current technology is not yet able to model the idea of fairness. Philosophical analysis and the theory of ethics will play the main role in modelling such social notions: social AI will not come to the real world without the humanities.
Moreover, we should allow room for "unknown unknowns" among phenomena in the world where human beings and machines live together. AI reveals a new aspect of the realm beyond human control; it opens up a new nature.
The proposal to regard AI as nature casts insight on the technological singularity issues. People may claim that we should not live in fear of machines. We, human beings of the modern age, make maximum endeavors to render most parts of the world intelligible, understandable, and predictable; we need to find reasonable explanatory theories that can tell us what is going on, tangible or intangible. Some might go further and reject machines. Indeed, we should not simply fear machines: we actually need human-understandable, human-intelligible theories of the behavior of AI.
Human beings need a theory that makes the world intelligible to humans, while the theories used inside computers are not enough for human understanding. The perplexing situation appeared in the case of AlphaGo: humans at first did not understand why the system chose a specific move at a given stage of a game. It maximizes the probability of winning, based on machine learning over some thirty million games. We human beings, by contrast, rely on flesh. We need rest and feeding, and we actually behave on chemical reactions in our bodies without realizing what goes on behind our judgments. We do not rely only on reasoning, deductive or inductive.
Remember that Alan Turing rightly pointed out that there are various ways of thinking and understanding even among human beings. We need not assume that computers think like humans. His words have come true.
The notions of action, agency, free will, responsibility, goodness, and, most importantly, fairness should be formulated in a form readable and intelligible to machines. These notions should be coherent with our human-readable versions, but the machine-readable versions may take a totally different form, as the internal structure of machines is unlike ours. To formulate such notions explicitly, conceptual analysis on the basis of philosophical theories of society will give boundary conditions for the formulation. Vocabularies characterizing their logical features should be investigated in philosophical logic, with examination of and feedback to ethical theories to guarantee coherence with the human-readable versions. We are still on the way to implementing the utilitarian notions of "better" and "best" on machines (Murakami, 2005), and any attempt at a machine-readable "fairness" should draw on those experiences.
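As a hint of what such a machine-readable formulation could look like, consider a simplified utilitarian comparison of actions (our own sketch; the full deontic system of Murakami (2005) is considerably richer):

```latex
% u(a): total utility of the outcome of action a over all affected agents
% A:    the set of actions available to the agent
a \succeq b \iff u(a) \ge u(b)
    % ``a is at least as good as b''
\mathrm{Best}(a) \iff \forall b \in A,\; u(a) \ge u(b)
    % ``a is among the best available actions''
O(a) \iff \mathrm{Best}(a)
    % obligation defined as optimality
```

A machine-readable notion of fairness would require a comparably explicit definition, presumably with constraints comparing utilities across agents rather than a single aggregate sum.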
Such a reformation of the basis of social philosophy may lead to a drastic restructuring of philosophical theories in the feedback process. We might find implicit, inherent incoherence in existing philosophical theories of society. Even without such incoherence, philosophical theories need to be updated if the assumptions of our society are to be rewritten by technological advance. We claim that the guideline should offer (1) a set of assumptions protected from update and (2) preferences for choosing a new set of assumptions. Assumptions that realize human rights and human dignity, for example, are to be protected.
Proposal: education in the world with AI
In future education, both human beings and machines should learn the updated conceptual framework in a readable way, so that both sides behave under the same assumptions about our society. AI and human beings need to find a way to live together, and social rules with mutual agreement in communities are the key to creating that common ground. Current technology is not yet able to model the full idea of fairness and the other social notions, however. Philosophical analysis and the theory of ethics will play the main role in modelling a full-fledged version: social AI will not come to the real world without the humanities.
Social demands, with forecasts of the world with AI and robots, should be reflected in every learning environment, whether schools (for human beings) or training facilities (for machines), as both humans and machines should develop the world toward the future. "Resident machines" in schools or households may learn everyday lifestyles and the social demands of the locality, if they can establish some sort of trust between humans and machines through their behavior over an extended period. The results of such learning, however, may not be universal: the behavior is optimized for that very environment. We propose that patchworking such "local trust" can be the answer, instead of any single universal ethical system installed as the absolute value system; it parallels decentralized trust in blockchains. It takes a while for machines to learn to behave well in the world with human beings, just as human growth needs a considerable time span. Machines may accelerate the computation of a local ethical value system, but the integration of ethical value systems will require much more computation, as any system integration does.
We then argue that information education for human beings should be complemented by philosophy and ethics. Human rights and human dignity are the key notions of modern society, yet they are not fully covered in any subject in the whole course of the educational curriculum set by the Japanese government. Without emphasis on citizenship education, the word "moral" is vaguely used to mean adaptation to existing social restrictions without a critical attitude. Such a misleading design of the national curriculum actually leads to confusion in classrooms and society, and indirectly causes online incidents. Moreover, the lack of emphasis on human rights in education deprives students of an understanding of the social role of guidelines and mutual agreement. A vacuum of conceptual framework in fact entails a vacuum of policies.
This work was partially supported by JSPS Grant-in-Aid for Scientific Research JP15K01978.
Center for Educational Computing. (1998). Fiscal 1998 Overall assessment report Achievements and Issues, Center for Educational Computing. Retrieved January 29, 2017, from http://www.cec.or.jp/e-cec/CEC_houkokusyo.html
Center for Educational Computing. (2009). Regarding institutional issues in the “Information Navigation Age”: A survey on information education in high schools and others. Tokyo, Japan: Hitachi Consulting Co., Ltd.
Japan Network Information Center. (2015). The Internet Timeline. Retrieved January 29, 2017, from https://www.nic.ad.jp/timeline/en/
Joint Task Force for Computing Curricula. (2005). Computing Curricula: The Overview Report covering Undergraduate Degree Programs in Computer Engineering. United States of America: ACM/AIS/IEEE-CS.
Lee, P. (2016, March 25). Learning from Tay’s introduction. Retrieved March 25, 2016, from https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/
Minpyo, H., & Jinho, K. (2016). Jinkochino ha Goban no Yume wo Miruka? Alpha Go vs Lee Sedol. Tokyo Sogensha. The original version is in Korean. (in Japanese: 洪 旼杓, 金 振鎬著 洪敏和訳「人工知能は碁盤の夢を見るか?」東京創元社 2016年)
Moor, J. H. (1985). What is Computer Ethics? Metaphilosophy, 16(4), 266–275.
Murakami, Y. (2005). Utilitarian Deontic Logic. In R. Schmidt, I. Pratt-Hartmann, M. Reynolds, & H. Wansing (Eds.), Advances in Modal Logic (Vol. 5, pp. 211–230). London: King’s College Publ.
Nakayama, Y., Nakano, Y., Kakuda, H., Kuno, Y., Suzuki, M., Wada, B. T., … Kakehi, K. (2015). Current Situation of Teachers Assigned for the Subject of “Information” at High-schools in Japan. Research Report Computer and Education (CE), 131(11), 1–9.
Numazaki, T., Nakaya, T., Murakami, Y., & Tatsumi, D. (2016). Comparison of Curriculum and Information Curriculum Standard (No. J07). Chiba Prefectural, Japan: Kashiwanoha High School.
OECD. (2016a). Country Note Japan: Key findings. Programme for International Student Assessment (PISA), Results from PISA 2015. Retrieved from https://www.oecd.org/pisa/PISA-2015-Japan.pdf
OECD. (2016b). PISA 2015 Context Questionnaires Framework. In PISA 2015 Assessment and Analytical Framework (pp. 101–127). OECD Publishing.
Okawa, S. (2016). Fukutu no Gishi. Kodansha. (in Japanese: 大川慎太郎「不屈の棋士」講談社現代新書 2016年)
Secko, D. M., Amend, E., & Friday, T. (2013). FOUR MODELS OF SCIENCE JOURNALISM: A synthesis and practical assessment. Journalism Practice, 7(1), 62–80. https://doi.org/10.1080/17512786.2012.691351
Tatsumi, T., Murakami, Y., & Otani, T. (2015). The Information Ethics Information in our Future. Information Education Symposium, 45–52.
Tatsumi, T., Murakami, Y., & Otani, T. (2016). The Role of Information Education in Artificial Intelligence and Robotics Society. Information Education Symposium 2016 Proceedings, 15–22.
Toyofuku, S. (2016). Awareness of ICT is the world’s lowest. Retrieved January 29, 2017, from http://i-learn.jp/archives/719
- Donald Rumsfeld (2002): "There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are also unknown unknowns. There are things we don't know we don't know."