Prof. Seppo J. Sirkemaa, University of Turku, Finland
Dr Seppo Sirkemaa works at the Turku School of Economics at the University of Turku. His office is at the University Consortium of Pori. Dr Sirkemaa holds a Ph.D. and a master's degree in information systems science from the Turku School of Economics and Business Administration. He has held various academic positions, including professor of information systems management and research professor at the Turku School of Economics and Business Administration. Dr Sirkemaa was the vice director of the Turku School of Economics and Business Administration, Pori Unit, during 2003–2008, and has acted as professor in Entrepreneurship at the University of Turku. He has participated in several national and international research projects and has been active with conferences, for example as conference chair of the 14th International Conference on Telework (ITA2009). He has supervised numerous thesis projects, including doctoral theses, and is an active reviewer for several scientific journals as well as a committee member of international conferences. Dr Sirkemaa has published over 90 academic publications.
Speech Title: Key Information Systems Management Tasks - Ensuring Robustness
Abstract: Information systems development is important in organizations. The challenge is to provide information systems and infrastructures that are robust for business activities and processes. This presentation suggests a framework that identifies key areas in information systems management and development. Firstly, there are three main perspectives on information systems management. Secondly, it is vital to map key domains and activities in an organization's information systems, so that systems remain operational even in unexpected situations. Expertise and skills in information systems management are also needed to work together with different stakeholders, to provide flexible information systems and infrastructures in the organization.
Prof. José Santos, University of A Coruña, Spain
José Santos obtained an MS degree in Physics (specialization in Electronics) from the University of Santiago de Compostela, Spain, in 1989, and a Ph.D. from the same University in 1996 (specialization in Artificial Intelligence). He is currently an Associate Professor in the Department of Computer Science at the University of A Coruña (Spain). His research interests include artificial life, neural computation, evolutionary computation, autonomous robotics and computational biology. In recent years his research has focused on computational biology, applying the knowledge acquired in his other research lines to the computational modelling of biological problems.
Prof. J. Santos has published papers across these research lines in first-quartile JCR journals, as well as numerous works in top-tier conferences in these areas, and has participated in the organization of events at international conferences such as IEEE-CEC, ESANN, PPSN and GECCO. He is an editorial board member of journals such as International Journal of Advanced Robotic Systems, Computer and Information Science, Artificial Intelligence Research and International Journal of Swarm Intelligence Research, as well as a reviewer for more than 40 journals. He has served as a program committee member in numerous conferences, adding up to more than 100 international appointments. Prof. J. Santos has also participated as a proposal evaluator and monitor for the FP7/H2020 EU Framework Programme for Research and Innovation.
Speech Title: Natural computing approaches to model the dynamic protein folding process
Abstract: This talk will consider the use of natural computing methods to model the protein folding process. This differs from the ample research performed on Protein Structure Prediction (PSP), which only considers an optimization or search problem to directly determine or predict the final folded structure.
The most difficult variant, "ab initio" prediction, considers only the primary protein structure information to obtain the final folded structure. Many bio-inspired methods, especially evolutionary algorithms, have been used to tackle this problem, using simplified lattice (and off-lattice) models for the protein conformation representation.
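As a concrete illustration of the simplified lattice models mentioned above, the sketch below evaluates the energy of a conformation in the classic 2D HP model, where a protein is a sequence of H (hydrophobic) and P (polar) residues placed on a grid as a self-avoiding walk. The function and data here are illustrative assumptions, not code from the talk.

```python
def hp_energy(sequence, coords):
    """Energy = -1 per H-H contact between residues that are adjacent
    on the grid but not consecutive in the chain (lower is better)."""
    positions = {tuple(c): i for i, c in enumerate(coords)}
    energy = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != 'H':
            continue
        # Only look right and up so each grid-adjacent pair is counted once.
        for nx, ny in ((x + 1, y), (x, y + 1)):
            j = positions.get((nx, ny))
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                energy -= 1
    return energy

# A 4-residue chain folded into a unit square: residues 0 and 3 touch.
seq = "HPPH"
fold = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(hp_energy(seq, fold))  # -1: one topological H-H contact
```

Bio-inspired PSP methods search over the space of such conformations while an energy function like this one acts as the fitness to be minimized.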
In contrast to the PSP problem, the talk will discuss the use of Cellular Automata (CA) schemes, adapted to the problem, to model the folding of a protein through time. These CA are able to define the moves and interactions of each amino acid to obtain a folded protein conformation. Discretized lattice models for the spatial protein representation, as well as more detailed models that represent the atomic positions, can be used.
Therefore, the folding would be modeled as an emergent behavior, a change from the dominant methodology that uses bio-inspired methods (or other search methods) for the PSP problem. This modeling will lead to a better understanding and characterization of the underlying biological process.
The talk will also consider the use of connectionist systems to implement the CA, which allows a generalization and extension of classical CA, incorporating the advantages of connectionist models and the possibility of using additional characteristics of the amino acids beyond the binary information of basic lattice models. Moreover, the use of evolutionary computing methods will be discussed, both to tackle the multimodal energy landscapes of protein folding and to automatically obtain the "neural-CA" that provide the folding.
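To illustrate the cellular-automaton principle these schemes generalize, the minimal sketch below runs a classical 1D binary CA: each cell updates from its local neighbourhood only, and global behaviour emerges from the local rule. Rule 110 is chosen purely as a stand-in; the problem-specific rules and richer amino-acid states described in the talk are not reproduced here.

```python
RULE = 110  # illustrative elementary CA rule, not the talk's folding rule

def ca_step(cells, rule=RULE):
    """One synchronous update of a 1D binary CA with periodic boundaries.
    Each cell's new state depends only on its 3-cell neighbourhood."""
    n = len(cells)
    out = []
    for i in range(n):
        # Encode (left, self, right) as a 3-bit index into the rule table.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

state = [0, 0, 0, 1, 0, 0, 0]
state = ca_step(state)
print(state)  # [0, 0, 1, 1, 0, 0, 0]
```

In the neural-CA setting, the fixed rule table would be replaced by a trained connectionist model over richer per-amino-acid state.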
Prof. Fillia Makedon, The University of Texas at Arlington, USA
Fillia Makedon is a Distinguished Professor in the Department of Computer Science and Engineering at the University of Texas at Arlington (UTA) and Director of the Heracleia Human Centered Computing Lab (heracleia.uta.edu). Her Ph.D. is from Northwestern University (1982). She has done pioneering work in the areas of Human Computer Interaction, Machine Learning and applications in Assistive Technologies, and has received major funding from the US National Science Foundation to study human performance. She is the author of over 300 peer-reviewed papers, chairs the PETRA conference (www.petrae.org), and is director of the iPerform industry-university National Science Foundation Center (iperform.uta.edu). In the Heracleia lab she supervises 9 Ph.D. students, as well as MS and undergraduate students, on projects that include human-robot collaboration (HRI), simulation studies using multisensing for human monitoring and prediction, and building technological innovations that improve the quality of life.
Speech Title: iWork, A Smart Robot Service for Vocational Assessment, Personalized Training and Rehabilitation
Abstract: The need for personalized vocational training, rehabilitation and accurate job-matching is essential to ensuring a strong manufacturing sector, which is vital to economic development and the ability to innovate. Automation and the increasing use of robots replacing human jobs stress the need for a major shift in vocational training practices to prepare workers for the factory of the future using robots, starting as early as high school. In particular, vocational safety training using the latest technologies is imperative, as thousands of workers lose their jobs or die on the job each year due to accidents, unforeseen injuries and lack of training. The iWork project is an interdisciplinary project funded by the US National Science Foundation. Its goal is to develop a smart robot-based vocational assessment and intervention service system to assess physical, cognitive and collaboration skills while a worker performs simulated manufacturing tasks. The system uses advanced computational methods to collect and analyze multisensing human-robot collaboration data. It uses a modular, easy-to-customize, closed-loop "behavior discovery" data flow that has four phases, assessment, recommendation, intervention and evaluation, with a human factors expert in the loop. After each cycle, the system recommends personalized interventions for vocational training and rehabilitation, with the assistance of the expert. The iWork system is designed to assess and train both humans and work-assistive robots as they collaborate, and to produce personalized, low-cost vocational training solutions with large economic and societal impact across sectors such as energy, transportation, mining, electronics and robotics. It would also impact many traditional disciplines (e.g., education, social sciences, psychology, computer science, rehabilitation and robotics). It has the potential to impact millions of persons seeking a manufacturing job, including those facing a type of learning or aging disability.
Prof. Francisco Moo-Mena, Universidad Autónoma de Yucatán (UADY), Mexico
Francisco Moo-Mena is a Professor in Computer Science at the Universidad Autónoma de Yucatán (UADY), in Mérida, Mexico. From the Institut National Polytechnique de Toulouse, in France, he received a Master's degree in Computer Science and a PhD (graduated with honors), in 2003 and 2007, respectively. He also received a Master's degree in Distributed Systems from the Instituto Tecnológico y de Estudios Superiores de Monterrey, Mexico, in 1997, and a BS in Computer Systems Engineering from the Instituto Tecnológico de Mérida, Mexico, in 1995. In 2002, he spent a summer at INRIA Rhône-Alpes, in France, collaborating as a visiting researcher on communication protocols for heterogeneous robotics environments.
Between 2005 and 2007, he was an active researcher in the European project Web-Service Diagnosability, Monitoring & Diagnosis (WS-DIAMOND), which was one of the very first projects related to Self-Healing Web Services. Since 2012, he heads the Distributed and Parallel Systems Group at UADY. Dr. Moo-Mena has published many papers in conferences and journals specialized in Distributed and Parallel Systems. His research interests include Autonomic Computing, Self-healing systems, Web services architectures, Semantic Web services, Data Science, Cloud Computing and Parallel Systems.
Speech Title: Reviewing QoS Ontologies for Web Services
Abstract: Web services technology supports the development of applications based on the Service Oriented Architecture (SOA) model, allowing the construction and integration of heterogeneous distributed applications. To this end, various standards were proposed in order to guarantee the goals established for this technology. In the SOA model, a key actor is the registry of services, in which, on the one hand, providers publish their services and, on the other hand, consumers discover services that can satisfy their requests. The standard proposed by Web services technology to implement these functionalities was known as UDDI (Universal Description, Discovery and Integration). However, this standard has several limitations, one of the most important being its limited syntactic ability to characterize Web services. Because of that, UDDI did not gain the expected acceptance and was finally abandoned as a project. For this reason, many proposals for extending UDDI capabilities were presented in the literature, mainly semantic extensions based on quality of service (QoS) parameters for Web services, designed using the concept of ontology. In this talk we will review the main ontologies proposed to model Web services considering their QoS parameters.
Prof. Katalina Grigorova, University of Ruse, Bulgaria
Katalina Grigorova holds an MSc degree in Applied Mathematics from the Moscow Power Engineering Institute and a PhD degree in Computer Aided Manufacturing from the University of Ruse. Currently she is a professor in Computer Science at the University of Ruse. Her research interests include Data Science, Business Process Modeling, Automated Software Engineering, Databases, Data Structures and Algorithm Design, and Programming. Dr. Grigorova leads the Knowledge-Based Software Engineering research laboratory at the University of Ruse and has supervised a number of PhD dissertations. She has acted as a committee member and reviewer for several international scientific conferences. As a member of the National Committee in Informatics, she is involved in organizing national competitions in Informatics and trains talented students in their extracurricular activities. Several times she has led Bulgarian national teams participating in the International and Balkan Olympiads in Informatics for secondary school students. Dr. Grigorova is a member of the Association for Information Systems (AIS) and its Bulgarian chapter BulAIS. She is a winner of an IBM Faculty Award.
Speech Title: Process mining – a new approach to business process modeling and analysis
Abstract: The basic idea of business modeling is to offer different concepts of the business and to present its structure and activities from different points of view. By describing business processes, organizations specify how they run their business. Business process modeling is an increasingly important factor influencing business management. In recent years, industries in various fields have developed techniques and tools that aim to improve business processes and adapt them more effectively to dynamically changing business objectives. Although there are many systems and standards for business process modeling, they basically offer a description of pre-specified processes. The availability of tools for automated retrieval of business process models would significantly benefit both software companies that develop integrated information systems and businesses that aim to achieve higher competitiveness. More and more data about business processes is recorded by information systems in the form of event logs, which can advantageously be used as input for retrieving business process models. Process mining focuses on the discovery, control and improvement of real business processes. By analyzing the data generated by IT systems that support business processes, it is possible to obtain reliable and comprehensive information on how business processes actually work. Process mining algorithms extract business process models from event logs. Many kinds of information can be collected about a process, such as control-flow, performance, organizational information and decision patterns. This optimizes business process intelligence and thus enables alternative and superior work strategies.
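As a small taste of how a model can be extracted from an event log, the sketch below builds a directly-follows relation, the first step of many process-discovery algorithms (for example the classic alpha miner). The log format and activity names are illustrative assumptions, not material from the talk.

```python
from collections import Counter

def directly_follows(event_log):
    """Count how often activity a is immediately followed by activity b
    in any trace of the log. The result is the directly-follows relation
    from which discovery algorithms derive a process model."""
    df = Counter()
    for trace in event_log:
        for a, b in zip(trace, trace[1:]):
            df[(a, b)] += 1
    return df

# A toy event log: each trace is one recorded execution of the process.
log = [
    ["register", "check", "approve", "pay"],
    ["register", "check", "reject"],
    ["register", "approve", "pay"],
]
dfg = directly_follows(log)
print(dfg[("register", "check")])  # 2: observed in two of the three traces
```

From such counts, a discovery algorithm can infer ordering, choice and concurrency between activities and assemble them into a process model.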
Dr. Andrea Marino, University of Pisa, Italy
Dr. Andrea Marino works at the Dipartimento di Informatica, University of Pisa. He holds a PhD degree in Computer Science from the University of Florence and has been a Research Fellow at the Universities of Milan and Pisa. He is the author of a book, 25 conference papers, and 10 journal papers on graph algorithms, with applications to enumeration, web crawling, bioinformatics, real-world graph analysis, information retrieval, mobile ad hoc networks, and computational linguistics.
He is a co-author of the current best algorithms to list all the paths, cycles, cliques and other popular patterns in graphs, and one of the developers of BUbiNG, the open-source web crawler with the highest performance (downloading, storing, and managing billions of web pages), developed at the LAW (Laboratory for Web Algorithmics), University of Milan. He is also a co-author of the current best algorithms to exactly compute the diameter, hyperbolicity, and top-central nodes (closeness centrality) in huge graphs. The diameter algorithm has been used to compute the diameter of the Facebook network (1.2 billion nodes). He is a collaborator of the INRIA (Institut National de Recherche en Informatique et en Automatique) BAMBOO & BAOBAB Team, Université Claude Bernard (Lyon 1, France), designing ad hoc listing algorithms for metabolic networks and NGS (Next-Generation Sequencing) data.
Speech Title: On computing the Diameter in Real-World Networks
Abstract: Complex networks such as telecommunication networks, computer networks, biological networks, cognitive and semantic networks, and social networks represent distinct actors as nodes and the connections between these actors as links (or edges). Measuring topological properties of these networks can highlight phenomena going on in the environment they represent. We present an algorithm to compute the diameter of a network, a classical topological measure corresponding to the maximum distance among all pairs of nodes, where the distance of a pair of nodes is the number of edges in the shortest path connecting them. Although its worst-case complexity is O(nm) time, where n is the number of nodes and m is the number of edges of the network, we experimentally show that our algorithm works in O(m) time in practice, requiring few breadth-first searches to complete its task.
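The "few breadth-first searches" idea can be previewed with a classic building block of such practical diameter algorithms: a double sweep, where a BFS from an arbitrary node finds a far node, and a second BFS from that node yields a lower bound on the diameter that is often exact on real-world graphs. This sketch is an illustration of that building block, not the full algorithm presented in the talk.

```python
from collections import deque

def bfs_ecc(adj, s):
    """Return (eccentricity of s, a node farthest from s) via one BFS.
    BFS discovers nodes in nondecreasing distance, so the last node
    discovered is at maximal distance from s."""
    dist = {s: 0}
    q = deque([s])
    far = s
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                far = v
                q.append(v)
    return dist[far], far

def double_sweep_lower_bound(adj, start):
    """Two BFSes: from start, then from the farthest node found.
    The second eccentricity is a lower bound on the diameter."""
    _, a = bfs_ecc(adj, start)
    ecc_a, _ = bfs_ecc(adj, a)
    return ecc_a

# Path graph 0-1-2-3-4 as an adjacency dict: the diameter is 4.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(double_sweep_lower_bound(path, 2))  # 4, matching the true diameter
```

The full algorithm then uses further BFSes to match this lower bound with an upper bound, which is why only a handful of searches are usually needed instead of the n searches of the naive O(nm) approach.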