Artificial Intelligence (AI) started in the late 1950s, with Machine Translation (MT) as one of its main challenges. Despite progress over the decades, no one would say today that MT has been solved. Instead it was joined by other challenges involving human language, resulting today in a mixture of (sub)fields, including Natural Language Processing, Information Retrieval, and Speech Processing. Looking at their histories, two conclusions can be drawn: (1) much of language processing consists of representation transformations, for example between natural languages, syntactic representations, semantic ones, or others, and (2) every subarea has undergone a shift of methodology, when the inadequacy of linguistics-oriented hand-crafted rules, and of the associated rule bases and knowledge structures such as ontologies, forces the realization that learning representation transformations automatically is more effective. In the new paradigm, the focus shifts to engineering techniques that produce larger, multiple-solution systems that are rarely exactly correct but that do not fail catastrophically. Eventually, the limit of representation expressiveness and transformation power is reached, and the subfield ‘matures’: progress slows down, commercialization occurs, and research funding disappears. This has happened to speech recognition (Apple’s Siri) and information retrieval (Google and Yahoo!), and it is close to happening with MT (Google Translate and others), Information Extraction (several small companies), Text Summarization and QA (some small companies), and NL access to databases (parts of commercial packages).
One can (re)interpret the state of most subareas of AI through the same historical lens. Many of them have experienced a parallel evolution and paradigm shift. The more ‘mature’ branches of AI, including Robotics and Scheduling, are all evaluation-driven engineering and offer commercially available solutions that work acceptably but not perfectly. The less ‘mature’ branches, such as Knowledge Representation, are almost all still working in the pre-automation/learning paradigm and require long apprenticeship training of students and postdocs in the ‘art’ of the area. For them, evaluations are scarce or contrived, and engineering is much less developed. If Newell and Simon were correct, and success in AI is indeed mostly a problem of choosing the most appropriate representation, then AI researchers should become skilled in the styles and powers of different representations (from connectionist to distributional/deep to symbolic); the methods of performing transformations on them (from manual rules using finite state automata to various forms of machine learning to neural networks); the kinds of resources required for each style (from basic collections of elements such as lexicons, to corpora used for training, with the attendant challenge of annotation, to the kinds of information best suited for unsupervised learning); and the techniques and nuances of evaluation, including sophisticated statistical reasoning and experimental design. Few students today are trained this way, which is a problem for both AI and language processing.
Home page: Eduard Hovy@CMU
The ability to manage vast amounts of data, both structured and unstructured, combined with the advances of Artificial Intelligence in building sophisticated self-learning systems, makes it possible to automate increasingly complex tasks, with a high economic impact on knowledge-based jobs. The last two years have witnessed enormous advances in cognitive computing, coupled with an acceleration of the rate of innovation in the area. In this talk, we start with a historical overview, and then move on to the frontier of technological innovation applied to knowledge-based jobs. Next, we discuss how to leverage IT innovation to drive, rather than follow, the process of automation and change in knowledge-based jobs.
Home page: Fabrizio Renzi@LinkedIn
Home page: Luigia Carlucci Aiello@Sapienza
Smart Cities is one of the biggest emerging paradigms of the beginning of the 21st century. For individuals as well as for economic actors, be they industries or other commercial entities, cities hold the promise of increased wealth. The same is true of education, research and development, where the concentration and hybridization of skills, knowledge and entrepreneurial behaviours reduce the time needed to imagine, design and deploy innovations.
The growth of these urban areas comes with many issues that have been highlighted since the last century. Housing, utilities, transport, environment and security are some of them, and they have led to many sophisticated technical answers, intended to provide perfect solutions that cure and overcome the identified problems. But what about the people? Is that enough to build smart citizens? Even if global wealth is improved, our cities must also provide the conditions for developing happy and empowered residents. How can we guarantee wide social inclusion and a robust, resilient organization? Here the concept of the city as a platform makes sense, since it allows all urban actors to build, customize or hack this platform and quickly find ways to evolve in the right direction. However, being able to does not mean it will happen. The technological transition will have to go along with economic and societal transitions. New behaviours and new business models will have to emerge and prove their efficiency. In this vision, technology is no longer only about building solutions, but about providing the necessary framework for innovation and dissemination.
Gilles Betis lives in the Paris area and is an engineer who graduated in 1987 from Ecole Supérieure d’Electricité in France. Home page: Gilles Betis@LinkedIn