Tutorial 1: Collective Intelligence
by Dr. Epaminondas Kapetanios
The concept of Collective Intelligence (CI) has been defined in many different ways, and its study has periodically been treated as a subfield of sociology, business, biology, and physics, wherever collective behaviour is the subject of study. Such behaviour can be found at every level, from quarks to bacteria, plants, and animals, up to human societies.
With the rise of communications technology and, in particular, the Internet (Social Web, Web 2.0, Semantic Web), Collective Intelligence has been defined as a form of networking that draws on user-generated content to enhance the pool of existing knowledge. To this extent, Collective Intelligence has been attributed to media convergence and participatory culture. CI, however, is not only a quantitative contribution but also a qualitative one.
In the context of an abstract computational space, CI is perceived as a multi-thread inference process and, therefore,
as a non-Turing model of computation. Given the rich history of computation as a Turing model and its evolution from
mainframes to personal computers, distributed computation, and on to the personalisation of content and the interactive
participation of humans, this tutorial reflects on:
- philosophical and epistemological considerations of the concept of Collective Intelligence
- the history and the mathematical underpinnings and techniques associated with CI
- types of Collective Intelligence (e.g., group cognition, co-operation and collaborative software
development, co-ordination) and its computational context as a non-Turing model
- engineering aspects of how we build systems that connect people and computers
in such a way that they collectively act more intelligently than individuals, groups, or computers
have done before
- programming with Collective Intelligence (e.g., CI-based programming languages,
machine learning from user-generated content, collaborative knowledge and ontology engineering)
- current successes (e.g., Google, Wikipedia, blogger and gaming communities) and future systems
and applications (e.g., the climate collaboratorium, collective prediction of future risks)
Short biography: Dr. Epaminondas Kapetanios
Dr. Epaminondas Kapetanios studied Statistics and Informatics at the University of Athens (Iconomicon Panepistimion).
He received his M.Sc. in Information Systems from the Institute of Program Structures and Data Organisation,
Faculty of Computer Science, Technical University of Karlsruhe, Germany. His Ph.D.
was awarded by ETH Zurich, Department of Computer Science, Institute of Information Systems.
He currently holds a position as a Senior Lecturer at the School of Computer Science, University of
Westminster, London, UK. His research interests and contributions span a variety of computational
and systems engineering approaches and techniques in which human participatory culture has been
a key aspect of problem solving. To this extent, his theoretical and technological achievements
range from languages, automata theory, and collective knowledge algebras and models to
natural-language-based query languages and cross-lingual information retrieval systems. He is
currently investigating forms of Collective Intelligence as they apply to the Social and Semantic
Web, as well as to Collaborative Software Development processes and Information Systems Engineering.
Epaminondas has published in peer-reviewed journals such as Data & Knowledge Engineering and Information
Sciences (Elsevier). He is also a member of the editorial review board of the International Journal of
Technology and Human Interaction, and has published peer-reviewed articles at conferences such as
NLDB, SSDBM, and FQAS. He is a member of the ACM and currently acts as a consultant for IT companies.
Tutorial 2: Unifying Knowledge Engineering with Language Engineering for Effective Knowledge Management: CyberBrain as a Case Study
By Asanee Kawtrakul, Ph.D.
Accumulating and managing knowledge on specific topics is crucial for building an intelligent society. Knowledge sources fall into two categories: tacit knowledge and explicit knowledge. Tacit knowledge that people carry in their minds, such as lessons learned from solving past problems and valuable information from previous experiences, is invaluable for knowledge sharing. With the development of the Internet and the World Wide Web, an enormous amount of explicit knowledge, including best practices and experience in focus areas, can be found and shared through research reports, blogs, and even Wikipedia. However, these sources of valuable knowledge are scattered across many different repositories, including human minds, and they come in many different formats. Moreover, the desired information or knowledge is difficult to access from scattered sources: search engines return ranked retrieval lists that offer little or no information on the semantic relationships among the scattered items, and even when such information is found, it is often redundant or excessive in volume, since there is no content filtering and no correct answer is indicated. Accordingly, as we move beyond simple information retrieval and simple database queries, automatic content aggregation, question answering, and knowledge visualization become more important.
This tutorial introduces a framework called CyberBrain that unifies Knowledge Engineering and Language Engineering for effective knowledge management. CyberBrain is a dynamic structure interconnecting organizations and communities. It behaves as a natural ecosystem for collecting and processing knowledge, including extracting and aggregating it, from both people's minds and unstructured documents on the Internet. By exploiting the semantic links between problems, the methods for solving them, and the people who solve them, knowledge services can be provided as a “one-stop service”. This challenging platform needs both complex natural language processing, including deep interpretation of semantic relations, and collaborative intelligence, that is, the participation of the right stakeholders, who create the community knowledge pool, annotate problem-solving solutions scattered across the web, and verify those extracted by the question-answering system. Moreover, task-oriented ontologies and semantics-based knowledge aggregation and organization are needed to shorten the time it takes to consume the knowledge.
Asanee Kawtrakul is the Deputy Executive Director of NECTEC, the National Electronics and Computer Technology Center, National Science and Technology Development Agency, Ministry of Science and Technology, and an Associate Professor in Language and Knowledge Engineering Technologies at Kasetsart University. She obtained her B.Eng. (honors) and M.Eng. in Electrical Engineering from Kasetsart University in Thailand and her D.Eng. in Information Engineering from Nagoya University, Japan. She is the leader of the Specialty Research Unit of Natural Language Processing and Intelligent Information System Technology (NaiST Lab.) at Kasetsart University. Her current research focuses primarily on unifying language processing technologies with Knowledge Engineering to support knowledge acquisition and management, and she has led various large-scale research projects. Beyond research, she works, on behalf of NECTEC, with the young researcher team at NECTEC and the alliance universities to bring research off the shelf and apply it to real-world problem solving for industry, government, enterprise, and the social community. She has also initiated various collaborative efforts between Thailand and the FAO, UN agencies, and other international institutions, including, in Japan, NII (the National Institute of Informatics, under the BIOCASTER Project) and Nagoya University, and, in France, Université Joseph Fourier (GETALP, LIG campus, under the Franco-Thai project), IRIT (Institut de Recherche en Informatique de Toulouse), and Laboratoire LE2I (UMR CNRS) at the University of Bourgogne. She has published more than 90 papers and books.