Artificial Intelligence (AI) began in the late 1950s, with Machine Translation (MT) as one of its central challenges. Despite progress over the decades, no one would say today that MT has been solved. Instead, it was joined by other challenges involving human language, resulting today in a mixture of (sub)fields, including Natural Language Processing, Information Retrieval, and Speech Processing. Looking at their histories, two conclusions can be drawn: (1) much of language processing consists of representation transformations, for example between natural languages, syntactic representations, semantic ones, or others, and (2) every subarea has undergone a shift of methodology, when the inadequacy of linguistics-oriented hand-crafted rules and their associated rule bases and knowledge structures (such as ontologies) forced researchers to realize that learning representation transformations automatically is more effective. In the new paradigm, the focus shifts to engineering techniques that produce larger, multiple-solution systems that are rarely exactly correct but do not fail catastrophically. Eventually, the limit of representation expressiveness and transformation power is reached, and the subfield ‘matures’: progress slows, commercialization occurs, and research funding disappears. This has happened to speech recognition (Apple’s Siri) and information retrieval (Google and Yahoo!), and is nearing completion for MT (Google Translate and others), Information Extraction (several small companies), Text Summarization and QA (some small companies), and NL access to databases (parts of commercial packages).
One can (re)interpret the state of most subareas of AI through the same historical lens. Many of them have experienced a parallel evolution and paradigm shift. The more ‘mature’ branches of AI, including Robotics and Scheduling, are all evaluation-driven engineering and offer commercially available solutions that work acceptably but not perfectly. The less ‘mature’ branches, such as Knowledge Representation, are almost all still working in the pre-automation/learning paradigm and require long apprenticeship training of students and postdocs in the ‘art’ of the area. For them, evaluations are scarce or contrived, and engineering is much less developed.
If Newell and Simon were correct, and success in AI is indeed mostly the problem of choosing the most appropriate representation, then AI researchers should become skilled in the styles and powers of different representations (from connectionist to distributional/deep to symbolic); the methods of performing transformations on them (from manual rules using finite-state automata to various forms of machine learning to neural networks); the kinds of resources required for each style (from basic collections of elements such as lexicons, to corpora used for training, with the attendant challenge of annotation, to the kinds of information best suited for unsupervised learning); and the techniques and nuances of evaluation, including sophisticated statistical reasoning and experimental design. Few students today are trained this way, which is a problem for both AI and language processing.
Eduard Hovy is a member of the Language Technology Institute in the School of Computer Science at Carnegie Mellon University. He holds adjunct professorships at universities in China and Canada, and is co-Director of Research for the DHS Center for Command, Control, and Interoperability Data Analytics, a distributed cooperation of 17 universities. Dr. Hovy completed a Ph.D. in Computer Science (Artificial Intelligence) at Yale University in 1987, and was awarded an honorary doctorate from the National Distance Education University (UNED) in Madrid in 2013. From 1989 to 2012 he directed the Human Language Technology Group at the Information Sciences Institute of the University of Southern California. Dr. Hovy’s research addresses several areas in Natural Language Processing, including machine reading of text, question answering, information extraction, automated text summarization, the semi-automated construction of large lexicons and ontologies, and machine translation. His contributions include the co-development of the ROUGE text summarization evaluation method, the BLANC coreference evaluation method, the Omega ontology, the Webclopedia QA Typology, the FEMTI machine translation evaluation classification, and a model of Structured Distributional Semantics. Dr. Hovy is the author or co-editor of six books and over 300 technical articles and is a popular invited speaker. In 2001 Dr. Hovy served as President of the Association for Computational Linguistics (ACL) and in 2001–03 as President of the International Association of Machine Translation (IAMT). Dr. Hovy regularly co-teaches courses and serves on Advisory Boards for institutes and funding organizations in Germany, Italy, the Netherlands, and the USA.
Home page: Eduard Hovy@CMU
The ability to manage vast amounts of data, both structured and unstructured, combined with the advances of Artificial Intelligence in building sophisticated self-learning systems, makes it possible to automate increasingly complex tasks, with a high economic impact on knowledge-based jobs. The last two years have witnessed enormous advances in cognitive computing, coupled with an acceleration of the rate of innovation in the area. In this talk, we start with a historical overview, and then move to the frontier of technological innovation applied to knowledge-based jobs. Next, we discuss how to leverage IT innovation to drive, rather than follow, the process of automation and change in knowledge-based jobs.
Fabrizio Renzi is currently the technology & innovation director for IBM Italy; prior to that he held the same role for IBM Italy FSS Clients.
In 2009–2010, based in Dubai, he was technical director for IBM’s System & Technology Group for emerging markets (Russia, Eastern Europe, Middle East & Africa). Since 1997 he has held several managerial positions in innovation areas such as e-commerce, e-business, network computing solutions, and client-server computing.
He joined IBM in 1990 as a System Engineer responsible for introducing new products and technologies from IBM’s R&D labs in the USA to Italy; he spent several years of his professional life on assignment at those labs. He has been a keynote speaker at several scientific and technical events and has taught as a contract professor at some Italian universities.
Born in 1964, he holds a Master’s degree in Electronic Engineering from Politecnico di Milano and an International Executive Program MBA from INSEAD (Paris and Singapore).
Home page: Fabrizio Renzi@LinkedIn
Home page: Luigia Carlucci Aiello@Sapienza
Smart Cities is one of the biggest emerging paradigms of the beginning of the 21st century. For individuals as well as for economic actors, be they industries or other commercial entities, cities hold the promise of increased wealth. This is also true in terms of education, research and development, where the concentration and hybridization of skills, knowledge and entrepreneurial behaviours reduces the time needed to imagine, design and deploy innovations.
The growth of these urban areas comes with many issues that have been highlighted since the last century. Housing, utilities, transport, environment and security are some of them, and they have led to many sophisticated technical answers intended to provide perfect solutions to cure and overcome the identified problems.
But what about the people? Is building smart cities enough to produce smart citizens? Even if global wealth improves, our cities must also provide the conditions for developing happy and empowered residents. How can we guarantee wide social inclusion, and a robust, resilient organization? Here the concept of the city as a platform makes sense, since it allows all urban actors to build, customize or hack this platform and quickly find solutions to evolve in the right direction.
However, the fact that we can does not mean that we will. The technological transition will have to go along with economic and societal transitions. New behaviours and new business models will have to emerge and prove their efficiency. In this vision, technology is no longer only about building solutions, but about providing the necessary framework for innovation and dissemination.
Gilles Betis leads the Urban Life and Mobility action line of EIT ICT Labs (www.eitictlabs.eu), focusing on the emergence of new usages and business models in the field of Urban Mobility and Collaborative Behaviours. He is also chair of the IEEE Smart Cities Initiative (smartcities.ieee.org), whose aim is to create a worldwide network of cities sharing their knowledge, visions, experiences and concerns, and supporting education, the creation of MOOCs and the organization of international conferences.
Since the end of the eighties, Gilles has been involved at Thales in the design of complex systems, first in the field of military air defence and then in transportation systems. He has extensive industrial experience in IT transportation systems (e-ticketing, road tolling, passenger information, integrated communication and supervision), always in an international and multi-industrial environment. When he joined EIT ICT Labs, he was Smart City and Mobility Solution Leader at Thales Communication and Security.
Having held positions as product line manager, marketing manager and solution leader, he has been constantly involved in foresight, innovation and product design matters. Through a holistic, systemic approach, his goal has always been to link emerging behaviours and societal needs to innovative technological solutions, allowing smooth adoption by final users.
Gilles Betis lives in the Paris area, and graduated as an engineer in 1987 from the Ecole Supérieure d’Electricité in France.
Home page: Gilles Betis@LinkedIn