Hello, and welcome to the wonderfully complex world of production ergonomics. This book is meant to
introduce engineering students, particularly in the area of production engineering, to the huge potential
of designing better industrial workplaces on the basis of a solid foundation of knowledge in ergonomics
(the scientific study of human work), also known as human factors. We have aimed to do this in a way
that is quickly accessible, comprehensive, and explained at various levels of detail depending on the
engineer’s future working role. In a teaching context, this book is best used as a reference companion.
The MATSim (Multi-Agent Transport Simulation) software project was started around 2006 with the goal of generating traffic and congestion patterns by following individual synthetic travelers through their daily or weekly activity programme. It has since evolved from a collection of stand-alone C++ programs into an integrated Java-based framework that is publicly hosted, open-source, and automatically regression-tested. It is currently used by about 40 groups throughout the world. This book takes stock of the current status.
The future security, economic growth, and competitiveness of the United States depend on its capacity to innovate. Major sources of innovative capacity are the new knowledge and trained students generated by U.S. research universities. However, many of the complex technical and societal problems the United States faces cannot be addressed by the traditional model of individual university research groups headed by a single principal investigator. Instead, they can only be solved if researchers from multiple institutions and with diverse expertise combine their efforts. The National Science Foundation (NSF), among other federal agencies, began to explore the potential of such center-scale research programs in the 1970s and 1980s; the NSF Engineering Research Center (ERC) program is in many ways the flagship of these efforts.
Many alcoholic beverages produced using various methods are consumed throughout the world. Alcoholic beverages made by brewing cereals, such as beer and Japanese sake, are extremely popular. Brewing them requires a complicated process in which the cereal must first be saccharified using enzymes such as amylase.
This book contains the refereed proceedings of the 17th International Conference on Agile Software Development, XP 2016, held in Edinburgh, UK, in May 2016. While agile development has already become mainstream in industry, this field is still constantly evolving and continues to spur an enormous interest both in industry and academia. To this end, the XP conference attracts a large number of software practitioners and researchers, providing a rare opportunity for interaction between the two communities. The 14 full papers accepted for XP 2016 were selected from 42 submissions. Additionally, 11 experience reports (from 25 submissions), 5 empirical studies (out of 12 submitted), and 5 doctoral papers (from 6 papers submitted) were selected, and in each case the authors were shepherded by an experienced researcher. Generally, all of the submitted papers went through a rigorous peer-review process.
This book introduces a novel approach to the design and operation of large ICT systems. It views the technical solutions and their stakeholders as complex adaptive systems and argues that traditional risk analyses cannot predict all future incidents with major impacts. To avoid unacceptable events, it is necessary to establish and operate anti-fragile ICT systems that limit the impact of all incidents, and which learn from small-impact incidents how to function increasingly well in changing environments. The book applies four design principles and one operational principle to achieve anti-fragility for different classes of incidents. It discusses how systems can achieve high availability, prevent malware epidemics, and detect anomalies. Analyses of Netflix’s media streaming solution, Norwegian telecom infrastructures, e-government platforms, and Numenta’s anomaly detection software show that cloud computing is essential to achieving anti-fragility for classes of events with negative impacts.
Audio signal processing is a highly active research field where digital signal processing theory meets human sound perception and real-time programming requirements. It has a wide range of applications in computers, gaming, and music technology, to name a few of the largest areas. Successful applications include, for example, perceptual audio coding, digital music synthesizers, and music recognition software. The fact that music is now often listened to using headphones from a mobile device leads to new problems related to background noise control and signal enhancement. Developments in processor technology, such as parallel computing, are changing the way signal-processing algorithms are designed for audio. Topics covered included, but were not limited to, the following areas: – Audio signal analysis – Music information retrieval – Enhancement and restoration of audio – Audio equalization and filtering – Audio effects processing – Sound synthesis and modeling – Audio coding – Sound capture and noise control – Sound source separation – Room acoustics and spatial audio – Signal processing for headphones and loudspeakers – High-performance computing in audio
Modern knowledge discovery methods enable users to discover complex patterns of various types in large information repositories. However, the underlying assumption has always been that the data to which the methods are applied originates from one domain. The focus of this book, and of the BISON project from which the contributions originate, is a network-based integration of various types of data repositories and the development of new ways to analyse and explore the resulting gigantic information networks. Instead of finding well-defined global or local patterns, the project sought domain-bridging associations, which are by definition not well defined, since they are especially interesting when they are sparse and have not been encountered before. The 32 contributions presented in this state-of-the-art volume, together with a detailed introduction to the book, are organized in topical sections on bisociation; representation and network creation; network analysis; exploration; and applications and evaluation.
This book constitutes the refereed proceedings of the 31st International Symposium on Computer and Information Sciences, ISCIS 2016, held in Krakow, Poland, in October 2016. The 29 revised full papers presented were carefully reviewed and selected from 65 submissions. The papers are organized in topical sections on smart algorithms; data classification and processing; stochastic modelling; performance evaluation; queuing systems; wireless networks and security; image processing and computer vision.
Technical Systems-of-Systems (SoS) – in the form of networked, independent constituent computing systems temporarily collaborating to achieve a well-defined objective – form the backbone of most of today’s infrastructure. The energy grid, most transportation systems, the global banking industry, the water-supply system, military equipment, many embedded systems, and a great many more strongly depend on systems-of-systems. The correct operation and continuous availability of these underlying systems-of-systems are fundamental for the functioning of our modern society. The 8 papers presented in this book document the main insights on Cyber-Physical Systems of Systems (CPSoSs) that were gained during the work in the FP7-610535 European Research Project AMADEOS (acronym for Architecture for Multi-criticality Agile Dependable Evolutionary Open System-of-Systems). It is the objective of this book to present, in a single consistent body, the foundational concepts and their relationships. These form a conceptual basis for the description and understanding of SoSs and go deeper into what we consider the characterizing and distinguishing elements of SoSs: time, emergence, evolution and dynamicity.