This paper expounds a way of seeing organizations that fits with second order cybernetics. It shows the relationship between cybernetics and radical constructivism and discusses the meaning of autonomy as it is used by cyberneticians. Implications for organizational members, managers, researchers and interventionists are elaborated.
Over the past two decades, vehicular engineering and design have made significant advances in driving assistance systems. The focus is gradually shifting from partial to full autonomy, with the possibility of self-driving cars on our roads becoming a reality. This is an important transformation, as vehicular controls start to rely critically on a range of sensing and communication systems. Such dependencies also make vehicular platforms vulnerable to a multitude of cyber security threats, which have the potential to disrupt and damage vehicles and to endanger driver and passenger safety, with only nascent security controls (derived, in the main, from safety techniques) to prevent or mitigate these dangers. This article critically examines some such threats and reviews the state of the art in controls and countermeasures to counter them.
Doc 10 : A Perspective on Knowledge Based and Intelligent Systems Implementation in Industrie 4.0
Abstract A worldwide trend in advanced manufacturing countries is defining Industrie 4.0, the Industrial Internet and Factories of the Future as a new wave that can revolutionize production and its associated services. Cyber-Physical Systems (CPS) are central to this vision and are expected to be part of smart machines, storage systems and production facilities able to exchange information with autonomy and intelligence. Such systems should be able to decide and trigger actions, and control each other independently; for this reason, the use of knowledge-based and intelligent information approaches is required. In this paper we present our perspective on how to support Industrie 4.0 with knowledge-based and intelligent systems. We focus on the conceptual model, architecture and necessary elements we believe are required for a real-world implementation. We base our conceptualization on the experiences gathered during participation in different ongoing research projects where the presented architecture is being implemented.
Doc 23 : Computers in control: Rational transfer of authority or irresponsible abdication of autonomy?
To what extent should humans transfer, or abdicate, “responsibility” to computers? In this paper, I distinguish six different senses of ‘responsible’ and then consider in which of these senses computers can, and in which they cannot, be said to be “responsible” for “deciding” various outcomes. I sort out and explore two different kinds of complaint against putting computers in greater “control” of our lives: (i) as finite and fallible human beings, there is a limit to how far we can achieve increased reliability through complex devices of our own design; (ii) even when computers are more reliable than humans, certain tasks (e.g., selecting an appropriate gift for a friend, solving the daily crossword puzzle) are inappropriately performed by anyone (or anything) other than oneself. In critically evaluating these claims, I arrive at three main conclusions: (1) While we ought to correct for many of our shortcomings by availing ourselves of the computer’s larger memory, faster processing speed and greater stamina, we are limited by our own finiteness and fallibility (rather than by whatever limitations may be inherent in silicon and metal) in the ability to transcend our own unreliability. Moreover, if we rely on programmed computers to such an extent that we lose touch with the human experience and insight that formed the basis for their programming design, our fallibility is magnified rather than mitigated. (2) Autonomous moral agents can reasonably defer to greater expertise, whether human or cybernetic. But they cannot reasonably relinquish “background-oversight” responsibility. They must be prepared, at least periodically, to review whether the “expertise” to which they defer is indeed functioning as he/she/it was authorized to do, and to take steps to revoke that authority, if necessary. (3) Though outcomes matter, it can also matter how they are brought about, and by whom.
Thus, reflecting on how much of our lives should be directed and implemented by computer may be another way of testing any thoroughly end-state or consequentialist conception of the good and decent life. To live with meaning and purpose, we need to actively engage our own faculties and empathetically connect up with, and resonate to, others. Thus there is some limit to how much of life can be appropriately lived by anyone (or anything) other than ourselves.
Doc 47 : Reports on the AAAI Spring Symposia (March 1999)
https://doi.org/10.1609/aimag.v21i2.1510 David J. Musliner Barney Pell Wolff Dobson Kai Goebel Gautam Biswas Sheila A. McIlraith Giuseppina Gini Sven Koenig Shlomo Zilberstein Weixiong Zhang
The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University’s Department of Computer Science, presented the 1999 Spring Symposium Series on 22 to 24 March 1999 at Stanford University. The titles of the seven symposia were (1) Agents with Adjustable Autonomy, (2) Artificial Intelligence and Computer Games, (3) Artificial Intelligence in Equipment Maintenance Service and Support, (4) Hybrid Systems and AI: Modeling, Analysis, and Control of Discrete + Continuous Systems, (5) Intelligent Agents in Cyberspace, (6) Predictive Toxicology of Chemicals: Experiences and Impact of AI Tools, and (7) Search Techniques for Problem Solving under Uncertainty and Incomplete Information.
Doc 50 : Utilizing BDI Agents and a Topological Theory for Mining Online Social Networks
Online social networks (OSNs) are facing challenges as they have been extensively applied to different domains, including online social media, e-commerce, biological complex networks, financial analysis, and so on. One of the crucial challenges for OSNs lies in information overload and network congestion. The demand for efficient knowledge discovery and data mining methods in OSNs has been rising in recent years, particularly for online social applications such as Flickr, YouTube, Facebook, and LinkedIn. In this paper, a Belief-Desire-Intention (BDI) agent-based method has been developed to enhance the capability of mining online social networks. Current data mining techniques encounter difficulties in dealing with knowledge interpretation based on complex data sources. The proposed agent-based mining method overcomes network analysis difficulties while enhancing the knowledge discovery capability through its autonomy and collective intelligence.
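The control cycle of a BDI agent can be sketched in a few lines. The class and the sample mining goal below are illustrative assumptions for exposition, not the authors' implementation:

```python
class BDIAgent:
    """Minimal Belief-Desire-Intention control loop (illustrative sketch)."""

    def __init__(self):
        self.beliefs = set()       # facts the agent currently holds
        self.desires = []          # (goal, precondition, action) triples
        self.intentions = []       # goals the agent has committed to
        self.log = []              # results of actions actually executed

    def perceive(self, percept):
        # Belief revision: accept the new observation of the network.
        self.beliefs.add(percept)

    def add_desire(self, goal, precondition, action):
        self.desires.append((goal, precondition, action))

    def step(self):
        # Deliberation: commit to desires whose preconditions hold.
        for goal, pre, act in self.desires:
            if pre(self.beliefs) and goal not in self.intentions:
                self.intentions.append(goal)
        # Means-ends reasoning: execute one committed intention.
        if self.intentions:
            goal = self.intentions.pop(0)
            for g, pre, act in self.desires:
                if g == goal:
                    self.log.append(act())

# Usage: an agent that summarizes a community once it believes the
# community is dense enough (a purely hypothetical mining goal).
agent = BDIAgent()
agent.add_desire(
    "summarize",
    precondition=lambda beliefs: "dense_community" in beliefs,
    action=lambda: "run community summary",
)
agent.step()                       # precondition unmet: nothing happens
agent.perceive("dense_community")
agent.step()
print(agent.log)                   # ['run community summary']
```

The point of the sketch is the separation the paper relies on: beliefs are revised by perception, desires become intentions only when the agent deliberately commits, and actions follow from intentions rather than directly from data.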
In large-scale Internet-based distributed systems, participants (consumers and providers) are typically autonomous, i.e. they may have special interests towards queries and other participants. In this context, a way to prevent a participant from voluntarily leaving the system is to satisfy its interests when allocating queries. However, participant satisfaction may also be negatively affected by the failures of other participants. Query replication is a solution to deal with provider failures, but it is challenging because of autonomy: not only can it quickly overload the system, but it can also dissatisfy participants with uninteresting queries. Thus, natural questions arise: should queries be replicated? If so, which ones? And how many times? In this paper, we answer these questions by revisiting query replication from a satisfaction and probabilistic point of view. We propose a new algorithm, called SbQR, that decides on the fly whether a query should be replicated and at which rate. As replicating a large number of queries might overload the system, we propose a variant of our algorithm, called SbQR+. The idea is to voluntarily fail to allocate as many replicas as required by consumers for low-critical queries so as to keep resources for high-critical queries during query-intensive periods. Our experimental results demonstrate that our algorithms significantly outperform the baseline algorithms from both the performance and satisfaction points of view. We also show that our algorithms automatically adapt to the criticality of queries and different rates of participant failures.
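The kind of on-the-fly replication decision described above can be caricatured as follows. The rule, its parameters, and the load-shedding threshold are our assumptions for illustration; they are in the spirit of SbQR/SbQR+ but are not the paper's actual algorithm:

```python
def replication_rate(criticality, provider_failure_prob, load, max_replicas=4):
    """Decide on the fly how many replicas a query should get.

    Hypothetical satisfaction/probability-based rule: replicate until the
    chance that at least one replica survives matches the query's
    criticality, capped at max_replicas.
    """
    if criticality <= 0:
        return 0                    # low-critical queries are not replicated
    replicas = 0
    survive_prob = 0.0
    # P(at least one replica survives) = 1 - p_fail ** replicas
    while survive_prob < criticality and replicas < max_replicas:
        replicas += 1
        survive_prob = 1 - provider_failure_prob ** replicas
    # SbQR+ flavour: under heavy load, voluntarily shed replicas of less
    # critical queries to keep resources for the highly critical ones.
    if load > 0.8 and criticality < 0.9:
        replicas = min(replicas, 1)
    return replicas

# A very critical query facing unreliable providers gets many replicas;
# the same query under heavy load is shed unless it is highly critical.
print(replication_rate(0.99, 0.3, load=0.2))   # 4
print(replication_rate(0.85, 0.5, load=0.9))   # 1
```

The design choice worth noticing is that replication count is derived from a target survival probability rather than fixed per query class, which is what lets the rate adapt automatically to provider failure rates.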
Doc 79 : A Smart Insole to Promote Healthy Aging for Frail Elderly Individuals: Specifications, Design, and Preliminary Results
Older individuals frequently experience reversible “frailty syndrome,” which increases the incidence of disability. Although physical exercise interventions may delay functional decline, there are difficulties in implementing them and performing seamless follow-up at home. Very few technological solutions attempt to address this challenge and improve individual participation. Our objectives are to (1) develop a technological solution designed to support active aging of frail older persons, (2) conduct a first laboratory evaluation of the device, and (3) design a multidimensional clinical trial to validate our solution. We conducted a first phase of multidisciplinary meetings to identify end users’ and health professionals’ unmet needs, and to produce specifications for the architecture of the solution. In a second phase, we performed laboratory tests of the first proposed prototype (a smart insole) with 3 healthy volunteers. We then designed an ongoing clinical trial to finalize the multidimensional evaluation and improvement of the solution. To respond to the needs expressed by the stakeholders (frailty monitoring and adherence improvement), we developed a prototype smart shoe insole to monitor key parameters of frailty during daily life and promote walking. It is a noninvasive wireless insole, which automatically measures gait parameters and transmits the information to a remote terminal via a secure Internet connection. To ensure the solution’s autonomy and transparency, we developed an original energy-harvesting system, which transforms mechanical energy produced by the user’s walking movement into electrical energy. The first laboratory tests of this technological solution showed good measurement reliability and good acceptability for the users.
We have planned an original iterative medical research protocol to validate our solution in real life. Our smart insole could support preventive strategies against disability in primary care by empowering older patients without increasing busy health professionals’ workload. Clinicaltrials.gov NCT02316600; https://clinicaltrials.gov/ct2/results?term=NCT02316600&Search=Search. Accessed: 2015-05-13. (Archived by WebCite at http://www.webcitation.org/6YUTkObrQ).
Assistive robotics aims at developing solutions (mechatronic devices, systems and technologies) to assist and interact with individuals with reduced motor or cognitive abilities in order to increase their autonomy in a personal environment. Rehabilitation robotics proposes similar solutions for assisted therapy and objective functional assessment of these patients usually in a clinical context [1].
Doc 141 : Loose Coupling Based Reference Scheme for Shop Floor-Control System/Production-Equipment Integration
Coupling a shop floor software system (SFS) with the set of production equipment (SPE) becomes a complex task. It involves open and proprietary standards, and information and communication technologies, among other tools and techniques. Due to market turbulence, either custom solutions or standards-based solutions eventually require a considerable effort of adaptation. The loose coupling concept has been identified in the organizational design community as a compensator for organization survival. Its presence reduces organization reaction to environment changes. In this paper, the results obtained by the organizational design community are identified, translated and organized to support the SFS-SPE integration problem solution. A classical loose coupling model developed by the organizational studies community is abstracted and translated to the area of interest. Key aspects are identified to be used as promoters of SFS-SPE loose coupling and presented in the form of a reference scheme. Furthermore, this reference scheme is proposed here as a basis for the design and implementation of a generic coupling solution or coupling framework, included as a loose coupling stage between the SFS and the SPE. A validation example with various sets of manufacturing equipment, using different physical communication media, controller commands, programming languages and wire protocols, is presented, showing an acceptable level of autonomy gained by the SFS.
Doc 145 : Second Order Cybernetics and Enactive Perception
Purpose – To present an account of cognition integrating second‐order cybernetics (SOC) together with enactive perception and dynamic systems theory. Design/methodology/approach – The paper presents a brief critique of classical models of cognition, then outlines how integration of SOC, enactive perception and dynamic systems theory can overcome some weaknesses of the classical paradigm. Findings – Presents the critique of evolutionary robotics showing how the issues of teleology and autonomy are left unresolved by this paradigm, although their solution fits within the proposed framework. Research limitations/implications – The paper highlights the importance of genuine autonomy in the development of artificial cognitive systems. It sets out a framework within which the robotic research of cognitive systems could succeed. Practical implications – There are no immediate practical implications, but see research implications. Originality/value – It joins the discussion on the fundamental nature of cognitive systems …
Doc 155 : Explaining Biological Functionality: Is Control Theory Enough?
It is generally agreed that organisms are Complex Adaptive Systems. Since the rise of Cybernetics in the middle of the last century, ideas from information theory and control theory have been applied to the adaptations of biological organisms in order to explain how they work. This does not, however, explain functionality, which is widely but not universally attributed to biological systems. There are two approaches to functionality, one based on etiology (what a trait was selected for), and the other based on autonomy. I argue that the etiological approach, as understood in terms of control theory, suffers from a problem of symmetry, by which function can equally well be placed in the environment as in the organism. Focusing on the autonomy view, I note that it can be understood to some degree in terms of control theory in its version called second order cybernetics. I present an approach to second order cybernetics that seems plausible for organisms with limited computational power, due to Hooker, Penfold and Evans. They hold that this approach gives something like concepts, certainly abstractions from specific situations, a trait required for functionality in its system adaptive form (i.e., control of the system by itself). Using this cue, I argue that biosemiotics provides the methodology to incorporate these quasi-concepts into an account of functionality.
Doc 165 : Design and Development of an M-Commerce Environment: The E-CWE Project
We discuss the use of software agents in the design and development of an m-commerce environment. Software agents are considered because of their features, such as autonomy, sociability, and mobility. Users are currently struggling to complete their e-commerce transactions. For instance, they have to adapt their behaviors when they browse e-commerce sites. Conducting similar transactions from wireless devices (e.g., mobile phones) requires new approaches. Multiple issues, which vary from low bandwidth and high latency to screen sizes, are raised. The E-Commerce Through Wireless Devices project aims at investigating techniques and offering solutions to support users in undertaking m-commerce transactions.
Doc 170 : E-commerce oriented software agents: Towards legal programming: a legal analysis of ecommerce and personal assistant agents using a process/IT view of the firm
Abstract Agent-based technologies and processing may answer some of the legal difficulties raised by traditional online commerce, introducing elements of compliance, control, flexibility and personalisation. However as they mediate commercial relationships with third parties, software agents in turn raise new legal difficulties, while potentially heightening user fears and mistrust. The autonomy, adaptivity and interactivity of agents, combined with the advent of ubiquitous computing, introduce a new set of legal dimensions including the liability of agent users, the automation of notification and consent or the attribution of responsibility, as well as new fears for users. This article presents a process oriented analysis of agent activities, within the context of augmented reality: the application of Internet technologies to the real world, specifically in this case to supermarket shopping. Specific areas of difficulty are contract and consumer protection law, as well as privacy. These topics are highlighted, together with trust issues raised in Multi-Agent Systems which will be discussed in a later article in this series.
Doc 174 : Trusted autonomic service cooperation model and application development framework
Achieving the dynamic, on-demand self-organization and self-evolution of virtual organizations (VOs) through autonomic service cooperation is an excellent approach for developing assembled service-oriented application software systems in the Internet computing environment. However, because autonomic individual behaviors are difficult to predict and control, this approach encounters a “trust” crisis over the cooperation effect. In order to solve this crisis, this paper proposes a model of Norm-Governed and Policy-Driven autonomic service cooperation (NGPD). The key idea of NGPD is to constrain and govern the cooperation behaviors of autonomic individuals, and their evolution, by formulating systematic standards of social structures and coupling norms of cooperation behaviors; thereby the cooperation behaviors (i.e. behaviors for providing and requiring services) of autonomic individuals and the cooperation effect can be controlled, predicted, and thus become trusted. Furthermore, NGPD provides a “macro-micro” link mode to support the operation-level implementation of macro-governance and creates a policy-driven self-management mechanism for individual behaviors to achieve the mapping from macro-governance to micro-behaviors. Thus, the effect of macro-governance can be exerted on autonomic individuals so that they exhibit the intellect to conform to service contracts and cooperation behavior norms while still keeping high autonomy. Along with the settlement of this “trust” crisis, NGPD can overcome the limitations of non-autonomic service cooperation, and thus give the autonomy and change-response ability of service cooperation a robustness and intelligence that traditional service cooperation techniques cannot reach.
Furthermore, NGPD also establishes a solid foundation for developing norm-driven and contract-ensured self-organization of hierarchical cooperation, as well as cooperation self-adaptation and self-evolution driven by contract-performing circumstances. All of these give service cooperation-based VOs high performance in dynamic, on-demand self-organization and self-evolution.
The aim of this paper is to propose a modeling of corporate knowledge in cyberworlds. An enterprise is considered in the framework of multiagent methodology as a distributed computational system. The Agent-Oriented Abstraction paradigm was proposed earlier to describe agents and societies of agents in a fully generic way. In this paper, we investigate the application of this paradigm to the abstract modeling of corporate knowledge, extending the scope of traditional knowledge management approaches. We show that such an abstraction mechanism leads to very practical applications for cyberworlds, whether on the web or on any other medium. Our approach covers the broadest possible scope of corporate knowledge, emphasizing the distributivity and autonomy of agents within cyber systems. This approach can be further used to better simulate and support knowledge management processes.
Doc 183 : Rethinking research management in Colombia
https://doi.org/10.1108/03684920710747011 Roberto Zarama Alfonso Reyes Eduardo Aldana Jorge Villalobos Juan Camilo Bohorquez Juan Pablo Calderón Alonso Botero Nelson L. Lammoglia José-Luis Villaveces Luis A. Pinzón Ricardo Bonilla Andrés Mejía José David Bermeo Isaac Dyner Neil F. Johnson Juan Alejandro Valdivia
This paper seeks to present a proposal to change the form in which knowledge is produced in Colombia. It discusses the key issue: transforming the way in which the production of knowledge currently takes place at the university level. To increase the production of knowledge in this country, there is a need to create bonds among industrial, governmental, and academic institutions. It is believed that this can be done by developing a system capable of continuously forming researchers at a doctoral level. The paper puts forward a proposal for the construction of such a system based on the developments of organizational cybernetics. The proposal is based on the concept of autonomy, which is crucial to solving this problem.
Doc 195 : Recursive modeling of loss of control in human and organizational processes: A systemic model for accident analysis
A recursive model of accident investigation is proposed by exploiting earlier work in systems thinking. Safety analysts can understand better the underlying causes of decision or action flaws by probing into the patterns of breakdown in the organization of safety. For this deeper analysis, a cybernetic model of organizational factors and a control model of human processes have been integrated in this article (i.e., the viable system model and the extended control model). The joint VSM-ECOM framework has been applied to a case study to help safety practitioners with the analysis of patterns of breakdown with regard to how operators and organizations manage goal conflicts, monitor work progress, recognize weak signals, align goals across teams, and adapt plans on the fly. The recursive accident representation brings together several organizational issues (e.g., the dilemma of autonomy versus compliance, or the interaction between structure and strategy) and addresses how operators adapt to challenges in their environment by adjusting their modes of functioning and recovery. Finally, it facilitates the transfer of knowledge from diverse incidents and near misses within similar domains of practice.
Doc 203 : A taxonomy of decentralized online social networks
Despite their tremendous success, centrally controlled cloud-based Online Social Networks (OSNs) have inherent issues related to user privacy and control. These issues have motivated researchers to make a paradigm shift in the OSN architecture by proposing to replace centrally controlled OSNs with Decentralized OSNs (DOSNs) in a peer-to-peer setting. DOSNs give users more autonomy and the chance to participate in social networks without losing control over their data. The various DOSN proposals have significant differences in their proposed services, architecture and extent of decentralization. In this survey, we study a number of proposals for peer-to-peer DOSNs, distil a set of criteria to compare them, and provide a taxonomy for their comparison.
Doc 208 : ‘Long autonomy or long delay?’ The importance of domain in opinion mining
Nowadays, people do not only navigate the web, but they also contribute content to the Internet. Among other things, they write their thoughts and opinions on review sites, forums, social networks, blogs and other websites. These opinions constitute a valuable resource for businesses, governments and consumers. In recent years, some researchers have proposed opinion extraction systems, mostly domain-independent ones, to automatically extract structured representations of opinions contained in those texts. In this work, we tackle this task in a domain-oriented approach, defining a set of domain-specific resources which capture valuable knowledge about how people express opinions on a given domain. These resources are automatically induced from a set of annotated documents. Some experiments were carried out on three different domains (user-generated reviews of headphones, hotels and cars), comparing our approach to other state-of-the-art, domain-independent techniques. The results confirm the importance of the domain in order to build accurate opinion extraction systems. Some experiments on the influence of the dataset size and an example of aggregation and visualization of the extracted opinions are also shown.
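Why domain matters can be shown with a toy lexicon-based scorer: the same lexical cue flips polarity across domains (a "long autonomy" is good for battery-powered headphones, a "long delay" is bad). The lexicons and scores below are invented for illustration and are not the paper's induced resources:

```python
# Hypothetical domain-specific opinion lexicons mapping bigrams to polarity.
DOMAIN_LEXICONS = {
    "headphones": {("long", "autonomy"): +1, ("long", "delay"): -1},
    "hotels": {("long", "stay"): +1, ("long", "wait"): -1},
}

def score_review(domain, tokens):
    """Sum the polarity of adjacent word pairs using the domain's lexicon."""
    lexicon = DOMAIN_LEXICONS.get(domain, {})
    return sum(
        lexicon.get((tokens[i], tokens[i + 1]), 0)
        for i in range(len(tokens) - 1)
    )

print(score_review("headphones", ["long", "autonomy"]))   # 1
print(score_review("headphones", ["long", "delay"]))      # -1
print(score_review("hotels", ["long", "wait"]))           # -1
```

A domain-independent system would have to assign "long" a single polarity everywhere, which is exactly the failure mode the domain-oriented resources are meant to avoid.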
Doc 219 : Business-to-business interactions: issues and enabling technologies
Business-to-Business (B2B) technologies pre-date the Web. They have existed for at least as long as the Internet. B2B applications were among the first to take advantage of advances in computer networking. The Electronic Data Interchange (EDI) business standard is an illustration of such an early adoption of the advances in computer networking. The ubiquity and the affordability of the Web has made it possible for the masses of businesses to automate their B2B interactions. However, several issues related to scale, content exchange, autonomy, heterogeneity, and other issues still need to be addressed. In this paper, we survey the main techniques, systems, products, and standards for B2B interactions. We propose a set of criteria for assessing the different B2B interaction techniques, standards, and products.
Doc 227 : DEVELOPMENT OF AN ORGANIZATIONAL MODEL THAT BRIDGES THE ORGANIZATION-PRACTICE BARRIER IN OVERUSE INJURY PREVENTION: AN ACTION RESEARCH STUDY IN SWEDISH ATHLETICS
Background Sports organisations and sportspersons often suffer from an “autonomy gap” obstructing implementation of interventions against overuse injuries. This obstruction has been described as the organisation-practice (OP) barrier. Objective To develop an inter-organisational sports safety promotion model targeted at prevention of overuse injuries by identifying initiatives that will lastingly bridge the OP barrier. Design Participatory action research with focus group interviews using the quality function deployment (QFD) technique. Setting Swedish athletics, involving the Swedish Athletics Association (SAA), clubs, athletes, coaches, and other stakeholders in the sport. Participants Representatives of Swedish athletics: officials from different divisions of the SAA, coaches, athletes, medical personnel, club representatives and parents of youth athletes (year 1: n=12; year 2: n=35). Qualitative data collected and analyzed Sports needs (Voice of Sports), organizational requirements, organizational changes. Main qualitative outcomes Organizational facilitators of sports safety promotion targeted at prevention of overuse injuries. Results An organic sports safety model resulted from the transformation of identified sports needs and organizational requirements into organisational changes. In this model, a dynamic and constantly active organisation-practice membrane reaches across the OP barrier. This organizational membrane consists of an infrastructure supporting continuous exchange of data and information between different actors in athletics. The central component is an Internet-based safety surveillance system covering Swedish athletics as a whole. A ‘Safety Forum’ is used for safety discussions between athletes, researchers, and SAA representatives, and a ‘Coaches’ Corner’ allows communication between experienced coaches and younger colleagues.
Conclusions An inter-organizational sports safety model facilitating collaboration between sports organisations and sportspersons was developed in the setting of Swedish athletics. The principal feature in the model is an infrastructure supporting continuous exchange of data and information between different actors. Further research is warranted to prospectively evaluate the effectiveness of the novel model in overuse injury prevention.
Doc 236 : Introduction to the special issue on agent autonomy in groups
This special issue of the Connection Science journal features ten papers on agent autonomy. The introduction describes the motivation for the special issue and briefly overviews the contributions. The papers in this volume are revised and extended versions of selected papers from a workshop that was held in July 2002 in Edmonton, Canada, in conjunction with the AAAI 2002 conference. This workshop followed an IJCAI 2001 workshop with the same title. Autonomy is a characterizing notion of agents, but a one-size-fits-all definition does not exist. The desire to build agents that exhibit a satisfactory quality of autonomy has included agents that have a long life, are highly independent, can harmonize their goals and actions with humans and other agents, and are generally socially adept. In 2002, our workshop focus was not only to continue to understand salient social notions in agent interaction that involve autonomy, but also to address scaling issues of social networks in inter-group interactions. We explored theories that synthesized inter-agent interaction into unified models, as well as derived and implied attitudes that go beyond the immediate and direct inter-agent attitudes and play a big role in the balance of attitudes among agents in a group. As in 2001, we had researchers in multiagency as well as human-agent and human-robot interactions. In agent-agent interaction, the agents are designed to change their interaction in order to optimize local qualities such as cost or system qualities such as coherence. In organized groups, agents are designed to model the organizational structure, and the concerns are mostly with deontic concepts. It is clearly important for agents to understand human preferences and guidance in complex systems. This involves many issues, from the ability to comprehend delegation in natural language to understanding human emotions. Presentations covered several space systems and a large naval application.
Discussions of applied research motivated the need for agents to explicitly reason about autonomy and delegation. Although there is a need for increased autonomy on the part of agents, there are times when autonomy is harmful, for instance, when agents are fully autonomous and their actions are un-interruptible. Humans in the loop of such agents may come to harm if they cannot interrupt the agents’ actions. Also, if agents have a negative influence on one another, their independence may detract from harmonious interaction. When tasks among agents are coupled, cooperating agents need to take one another’s actions into account and not be self-interested. This workshop has contributed to the understanding of social agent interactions. Jean-Claude Martin (2002, this issue) introduces TYCOON as a framework for the analysis of human verbal and non-verbal behaviour. This system is used in making sense of multimodal human-computer interaction and human-human communication. This work is similar to digital ethnography, which allows the study of intensely interactive collaboration at the level of both language and physical coordination in time and space (Hutchins, 1995). This is an ongoing project, and agents could be endowed with more autonomy by spontaneously volunteering information to participants. Jean-Michel Hoc (2000, this issue) presents issues of dynamic allocation of activities in cooperation between human operators and autonomous machines in complex environments. He suggests that a different consideration of the decomposition of the overall task into subtasks is needed for humans and automated agents. He gives examples from a series of studies on human-machine cooperation in air traffic control in order to illustrate his argument. Among his main points are (a) tasks should be defined with intentions, (b) mutual monitoring of humans and agents is needed, and (c) knowledge and plans should be shared among humans and agents.
McCauley and Franklin (2002, this issue) present a real-world massive MAS with approximately 350,000 individual agents that addresses a problem in the US Navy of assigning jobs to sailors at the end of each sailor's tour of duty. They have developed a cognitive agent capable of reasoning about the autonomy of sailors and Navy human detailers. They go on to discuss major issues regarding the design, interaction, and autonomy of the various agents involved. Castelfranchi and Falcone (2002, this issue) suggest relationships between trust and control in agents. They claim that the basic form of dyadic trust between two agents is the opposite of the notion of dyadic control between those agents. However, the more general notion of trust relies on control between agents. Several other nuances of the interaction between control and trust are explained. Barber and MacMahon (2002, this issue) consider group formation that specifies and optimizes the allocation of decision-making and action-execution responsibilities for a set of goals among agents within the MAS. They present an analysis of the space of decision-making and of adaptations in decision-making. This work helps us reason about improvements in organizational capabilities for decision-making and suggests changes in the structure of the organization. Schillo (2002, this issue) explores the relationship between self-organization of multiagent systems and adjustable autonomy in intelligent agents. His analysis pivots on the notion of delegation in order to define organizational relationships. He distinguishes task delegation from social delegation. He draws several organizational models in which autonomy diminishes as structure increases. Schreckenghost, Martin, Bonasso, Kortenkamp, Milam, and Thronesbery (2002, this issue) present another real-world MAS from a NASA mission.
This system supports collaboration among heterogeneous agents while they operate remotely and communicate asynchronously with multiple humans in the control loop. They identify research issues in groups where members have non-overlapping roles and are guided by high-level plans. A notable observation is that in space-based operations, dynamic reconfiguration of teams is not common. This author (Hexmoor 2002, this issue) presents a model that relates autonomy and power. He then goes on to discuss group effects that amplify the individual notions of power and autonomy. An algorithm for task allocation is discussed that demonstrates grouping agents into power groups. This work is a step in the direction of defining autonomy and power as they apply to a group, i.e., collective autonomy and collective power. Kar, Dutta and Sen (2002, this issue) illustrate the effectiveness of probabilistic reciprocity for promoting cooperation among agents. This is an extension of their work on reciprocity between individual agents. In this extension, group members offer opinions about the balance of their past interactions with individuals in other groups. The group must decide based on the collective opinions of its members. Group interactions are diminished when agents lie about their balance. However, under certain patterns of group selection, lying is ineffective. O'Hare (2002, this issue) explores virtual agent communities and identifies issues that underpin social cohesion. Among other points, he notes the importance of awareness and presence in collaboration. Several domains are examined in robotics, mobile/wearable platforms, and tour-guide avatars.
Doc 241 : An autonomous educational mobile robot mediator
So far, most applications of robotic technology to education have focused on supporting the teaching of subjects closely related to the robotics field, such as robot programming, robot construction, or mechatronics. Moreover, most applications have used the robot as an end in itself or as a passive tool of the learning activity, where the robot is constructed or programmed. In this paper, we present a novel application of robotic technologies to education, in which we use the real-world situatedness of a robot to teach non-robotics subjects such as math and physics. Furthermore, we also provide the robot with a suitable degree of autonomy to actively guide and mediate the development of the educational activity. We present our approach as an educational framework based on a collaborative and constructivist learning environment, where the robot acts as an interaction mediator capable of managing the interactions occurring among the working students. We illustrate the use of this framework with a 4-step methodology that is used to implement two educational activities. These activities were tested at local schools with encouraging results. Accordingly, the main contributions of this work are: i) a novel use of a mobile robot to illustrate and teach relevant concepts and properties of the real world; ii) a novel use of robots as mediators that autonomously guide an educational activity using a collaborative and constructivist learning approach; iii) the implementation and testing of these ideas in a real scenario, working with students at local schools.
Doc 251 : A Scenario-based Framework for Perceiving Crossbreed Service Interactions
Service-oriented computing provides technologies that allow multiple enterprises to integrate their businesses over the Internet. Typical execution behavior in this type of converged scheme comprises a network of autonomous peers interacting with each other. Modeling and analyzing the interactions among different services is a crucial problem in this domain. It is a particularly challenging task, since no single party has access to the internal states of all the participants. Desired behaviors have to be specified as constraints on the interactions among different peers, since the interactions are the only observable global behavior. Besides, it may be meaningful to specify the interactions among different parties before the services are implemented. Undeniably, one of the main requirements is preserving the autonomy of each participating partner during the interaction, without restricting the overall goals of the common process. Thus, mechanisms orchestrating distributed service workflows are needed. The scenario-based framework for perceiving crossbreed service interactions examined in this paper attempts to meet these requirements.
Doc 262 : The Impact of Computer Mediated Communication on the Processes and Outcomes of Negotiated Transfer Pricing
Abstract New communication technologies are offering organizations options that were not previously available. These new opportunities have the potential to affect many aspects of the organization, including coordination and control efforts. Despite a growing literature concerning computer-mediated communication, the basic question of whether the simple switch from written to computer-mediated communication changes behavior remains unanswered. We argue that advanced information technology has created a frame with enabling and limiting impacts that exists even when the medium's major characteristics are not present. This paper reports the results of a laboratory experiment concerning the effects of communication medium on the process and outcomes of negotiations in a transfer pricing situation. An organizational coordination mechanism and its interactions with communication medium were also examined. Structured negotiations were communicated through written exchanges and through computerized networks, where media richness and technical system capabilities were the same in both media. The negotiation process and outcomes were both measured. The results show that communication medium does affect both the process and outcomes of negotiations, even when media richness and system capabilities are the same in both media. Computer-mediated communication took fewer rounds of negotiation to reach an agreement, leading to an improved process, but produced lower outcomes, with lower firm profit and lower perceived autonomy. Thus, these results support our argument that medium provides a frame that affects behavior even when the major characteristics of the medium are absent. Further, medium interacts with arbitration, demonstrating the importance of considering other coordination and control mechanisms when examining the impact of medium.
Doc 283 : Survey-Based Discussions on Morally Contentious Applications of Interactive Robotics
Introduction: As applications of robotics extend to areas that directly impact human life, such as the military and eldercare, the deployment of autonomous and semi-autonomous robots increasingly requires the input of stakeholder opinions. Up to now, technological deployment has relied on the guidance of government/military policy and the healthcare system, without specific incorporation of professional and lay opinion. Methods: This paper presents results from a roboethics study that uses the unique N-Reasons scenario-based survey instrument. The instrument collected Yes, No, and Neutral responses from more than 250 expert and lay respondents via the Internet, along with their ethics-content reasons for the answers, allowing the respondents to agree with previously provided reasons or to write their own. Data from three questions relating to military and eldercare robots are analyzed qualitatively and quantitatively. Results: The survey reveals that respondents weigh the appropriateness of deploying robotics technology in concert with the level of autonomy conferred upon it. The accepted level of robot autonomy does not appear to depend solely on the perceived efficiency and effectiveness of the technology, but is subject to the robot's relationship with the public's principle-based reasons and the application field in focus. Conclusion: The N-Reasons instrument was effective in eliciting ethical commentary in a simple, on-line survey format and provides insights into the interactions between the issues that respondents consider across application and technology boundaries.
Doc 307 : Using Tittle’s control balance theory to understand computer crime and deviance
This article examines Tittle's control balance theory, which attributes criminal and deviant behaviour to the desire to increase levels of autonomy, and uses it to deconstruct Internet crime. It suggests that anonymity and deindividuation are facets of opportunity and means of reducing the likelihood that normal constraints will interfere, and it questions their position as causative elements of computer-mediated crime. It suggests that control balance theory can help explain why there may be high levels of computer crime and why individuals who might refrain from deviance in the real world may participate in deviant behaviour on-line. Finally, there are tentative suggestions for policy initiatives, as well as a recognition that both these and the application of control balance in this area would benefit from further analysis.
Doc 310 : Human Decision Making Model for Autonomic Cyber Systems
Real-time autonomy is a key element of systems that close the loop between observation, interpretation, planning, and action, commonly found in UxVs, robotics, smart vehicle technologies, automated industrial machinery, and autonomic computing. A real-time autonomic cyber system requires timely and accurate decision making and adaptive planning. Autonomic decision making understands the system's own state and the perceived state of its environment. It is capable of anticipating changes and future states and of projecting the effects of actions onto future states. An understanding of the current state and a knowledge/model of the world are needed for extrapolating actions and deriving action plans. This position paper proposes a hybrid, statistical-formal approach toward achieving real-time autonomy.
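The observation-interpretation-planning-action loop described in this abstract can be sketched as a minimal control cycle. This is an illustrative sketch only, not the paper's proposed statistical-formal method; the function names, state labels, and threshold are assumptions.

```python
# Minimal sketch of the observe-interpret-plan-act loop described above.
# The sensor model, state names, and planning rule are illustrative assumptions.

def observe(raw_readings):
    """Fuse noisy readings into a single observed value (here, a mean)."""
    return sum(raw_readings) / len(raw_readings)

def interpret(observation, threshold=0.5):
    """Map an observation onto a discrete estimate of system state."""
    return "degraded" if observation > threshold else "nominal"

def plan(state):
    """Choose an action; a real system would also project effects into future states."""
    return "mitigate" if state == "degraded" else "continue"

def control_cycle(readings):
    """One pass through the closed loop: observe -> interpret -> plan -> act."""
    return plan(interpret(observe(readings)))

print(control_cycle([0.2, 0.3, 0.1]))  # -> continue
print(control_cycle([0.9, 0.8, 0.7]))  # -> mitigate
```

In a deployed autonomic system each stage would run under timing constraints, which is where the real-time and formal-verification concerns of the paper enter.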
Doc 318 : Unbinding Biological Autonomy: Francisco Varela’s Contributions to Artificial Life
To say that artificial life is a young discipline in name only is to exaggerate, but it would be mistaken to think that its goals are new. The marriage of synthetic scientific aims with computational techniques makes artificial life a product of the last fifteen years, but its motivations have much deeper roots in cybernetics, theoretical biology, and the age-old drive to comprehend the mysteries of life and mind. Little wonder that a good part of the work in this field has been one of rediscovery and renewal of hard questions. Other disciplines have sidestepped such questions, often for very valid reasons, or have put them out of the focus of everyday research; yet these questions are particularly amenable to be treated with novel techniques such as computational modeling and other synthetic methodologies. What is an organism? What is cognition? Where do purposes come from? To rediscover and reinvent can be a pleasurable but difficult job. As historians of science know very well, concepts and methods evolve, disfavored theories get buried under successful ones (and not necessarily because they are any less valuable), metaphors and languages change, and social perception and pressures influence the directions of research. In view of this, how fortunate that an exceptional and multifaceted scientist like Francisco Varela has not only provided us with a rich legacy of ideas that, both in content and in perspective, are worthy of serious and active (re-)discovery and exploration, but has also himself been a predecessor and supporter of the field. Concrete examples of his work follow the methods of artificial life, both from when the label did not exist and from afterwards. We also have direct collaborators, many of whom are contributors to this special issue, who worry about many of the same problems as Varela did and whose work is directly connected to research lines in this field. 
Varela’s key scientific worry was the understanding of biological systems in their full autonomy—neither as a collection of inert components nor as something magical, but as introducing into the universe of physical interactions a special kind of novelty: an autonomously organized system with a formal identity and a point of view. This central worry led him in the 1970s to formulate, together with Humberto Maturana, the theory of autopoiesis, which radically alters the perspective on many biological phenomena by taking seriously (actually by founding itself on) the self-producing nature of bounded metabolic activity. The organism provides us with our primary biological unity, not only as an ontological foundation for biology but, more importantly, from an everyday pragmatic and scientific perspective. Dobzhansky’s famous motto could well be paraphrased as: “Nothing in biology makes sense except in the light of the organism.” The obviousness in the new version is apparent, for understanding what makes an organism remains our problem and is what separates biology from physics. Following the principle of biological autonomy has allowed Varela to formulate radically novel theoretical proposals for key unsolved problems such as the origin of life, the artificial synthesis of minimal cells, the somatic ecology of the extra-cellular
Doc 328 : Towards teaching on managing complexity using complexity management tools
This paper explains the design of the course: Introduction to Systems Thinking. This design uses Complexity Management tools from Organisational Cybernetics in order to teach Systems Thinking in general, and Organisational Cybernetics to Manage Complexity in particular. This design attempts to articulate theories, tools, and practices of systems thinking in a context in which students can develop their autonomy. The implementation of this design is a game, which models a social system. In this way, students learn how to make decisions by themselves in a complex environment.
Doc 329 : Cyberspace and mock apple pie: a vision of the future of graphics and virtual environments
We assume that in the future any user's display platform will be able to render fantastically complex scenes. Having finally shed the concerns related to the computer graphics medium, developers will concentrate on the message. Content will be key: no longer will users accept nonsensical, artistically vacant environments simply because they're presented in a head-mounted display. This also means that static worlds, no matter how aesthetically pleasing, will come second to environments offering interactive content. The development and provision of dynamic content lie at the heart of the problem we face. For an environment to attract significant and regular participation, it must react in an intelligent and unpredictable fashion. Today, that intelligence can come from only two sources: live human collaboration and computer-generated autonomy. Collaborative VE research combines graphics, networking, human perception, and distributed computing issues. However, these facets betray a disappointing lack of coordination. Computer-generated autonomy (CGA) will certainly become inextricably melded with computer graphics. While this article focuses on other aspects of CVEs, the National Research Council's report on Modeling and Simulation provides excellent recommendations for future avenues of research in CGA, such as behavior adaptability and human representation. Many of the infrastructure requirements for CGA-enhanced systems with a large number of synthetic actors are the same as those needed for large-scale CVEs.
Doc 335 : Multidimensional strategies for the evaluation of learning in on-line courses
This work presents the results of a qualitative investigation supported by descriptions from an on-line collaborative specialization course, with emphasis on the process of evaluating students' learning. The investigation showed that, as a result of the possibilities that information and communication technologies create for people, the evaluation process cannot be limited to the logic of a post-procedural, definitive examination; rather, it should be applied continuously, in all ways and for all activities and practices in a course environment. This approach has a multidimensional nature and is able to handle the complexity of the related knowledge, taking into account many possibilities of interaction, different timings and spaces for learning and teaching, the related interfaces, and interdisciplinary approaches in the analyzed collaborative environment. The theoretical references explore evaluation concepts and their predominant practice, and examine several aspects of information and communication technology that apply to education. Based on these two main fronts, other concepts have been considered in the theoretical field, such as complexity, collaboration, time, space, and interdisciplinary aspects, among others, which have allowed the research to rest on a framework of ideas from which the analysis has been brought to light, in order to show the broad scope of multidimensional learning evaluation in an on-line collaborative environment, considering that such evaluation allows individual goals of students and teachers to be set according to a pedagogical strategy that dialectically focuses on autonomy and the collective construction of knowledge.
Another important point demonstrated in this investigation is that people are much more important than technologies, and that collaboration between people is also important for assessment at all moments, considering the history of each participant as an individual with a personal history and knowledge gathered from multiple sources and continuously processed.
Doc 339 : Multisensory shared autonomy and tele-sensor programming - Key issues in space robotics
Abstract The long-term goal of our robotics activities has always been based on the idea of relieving humans of inhuman and dangerous tasks. While in the early years of robotics our main focus of interest was restricted to designing robot sensors (and sensor-based man-machine interfaces) and closing smart sensory feedback loops, in recent years the activities have broadened considerably. Presently the general goal is the design of a new generation of multisensory light-weight robots for space applications, operable by astronauts as well as from ground stations, based on powerful telerobotic concepts and man-machine interfaces. This goal is characterized by a high degree of interdisciplinarity and comprises a few major task areas, such as mechatronics (sensor and actuator developments), telerobotics (remote control concepts for space robots), and learning (and self-improvement). In the early years of our robotic activities, cooperation with terrestrial industry was predominant; however, the last five years were characterized by close cooperation and contracts with the space industry. The space robot technology experiment ROTEX, Europe's first active step into space robotics, was largely based on the concepts and systems developed here (multisensory gripper, local autonomy, telerobotic station); nevertheless, in the future there will be considerable effort to transfer technology developed for space (e.g. light-weight concepts) back into terrestrial applications.
Doc 351 : World in Torment: A Time Whose Idea Must Come
Presents the full text of the Presidential Address by Stafford Beer to the Triennial Congress of the World Organization of Systems and Cybernetics, New Delhi, India, January 1993. Introduces the components of contemporary change and discusses the diagnostic approach of management cybernetics. Outlines a summary theory of autonomy and considers autonomy at the global level. Offers an account of the cybernetics of chronic societary triage, developing an analysis of triage through category A, B and C partition. Produces a summary theory of team syntegrity and discusses the power and the use of the team syntegrity model. Finally, outlines an action plan for World Syntegration.
Doc 366 : Challenges and Research Directions in Medical Cyber–Physical Systems
https://doi.org/10.1109/jproc.2011.2165270
Authors: Insup Lee, Oleg Sokolsky, Sanjian Chen, John Hatcliff, Eunkyoung Jee, BaekGyu Kim, Andrew King, Margaret Mullen-Fortino, Soojin Park, Alexander Roederer, Krishna K. Venkatasubramanian
Medical cyber-physical systems (MCPS) are life-critical, context-aware, networked systems of medical devices. These systems are increasingly used in hospitals to provide high-quality continuous care for patients. The need to design complex MCPS that are both safe and effective has presented numerous challenges, including achieving high assurance in system software, interoperability, context-aware intelligence, autonomy, security and privacy, and device certifiability. In this paper, we discuss these challenges in developing MCPS, some of our work in addressing them, and several open research issues.
Doc 367 : Agents for process coherence in virtual enterprises
Open environments such as the Internet, and even corporate intranets, enable a large number of interested parties to use and enhance vast quantities of information. These environments support modern applications, such as manufacturing, virtual enterprises, and ubiquitous information access, which involve a number of information sources and component activities. However, without principled techniques to coordinate the various activities, any solution would yield disjointed and error-prone behavior while consuming excessive effort to build and maintain. The agent metaphor, long studied in artificial intelligence, has recently become popular in mainstream computing, largely due to its suitability for open environments. Agents can be thought of as active objects with some special properties tailored to open environments. For our purposes, the key aspects of agents are their autonomy and their abilities to perceive, reason, and act in their environment, as well as to socially interact and communicate with other agents [7]. When agents interact with one another they form a multiagent system. As part of a multiagent system, agents can capture and apply the semantic constraints among heterogeneous components in order to enact distributed workflows. Autonomy is critical in open environments. Consider a manufacturing scenario requiring supply-chain coordination. It is natural to model independent companies in a supply chain as represented by autonomous agents. But, at first sight, autonomy is a mixed blessing: if the companies behaved arbitrarily, the supply chain would break. Consequently, our main technical challenge is to manage autonomy, that is, to maximize freedom without letting it devolve into chaos. We propose that the main basis for managing autonomy lies in the notion of commitments. A flexible formulation of commitments can provide a natural means through which autonomous agents may voluntarily constrain their behavior.
By flexible, we mean that it should be possible to cancel or otherwise modify the commitments. Consider a situation where a purchaser is trying to obtain some parts from a vendor. We would like the vendor to commit to delivering parts of the right quality to the purchaser. However, it is important that the supply chain be able to survive exceptions, such as when the manufacturing plant goes down in an earthquake, or when the purchaser decides that it needs the parts to be of a lower error tolerance than initially ordered. Information cannot be understood independently of the processes that create or consume it. The desired flexibility of behavior and the ability to recover from failures require an approach that is sensitive to how those processes interact. We show that when agents are associated with each independent process, our flexible notion of commitments can capture the desired interactions among those processes.
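The flexible commitments sketched above, which can be created, cancelled on an exception, or modified when requirements change, can be illustrated with a minimal data structure. This is a sketch under stated assumptions: the class shape, field names, and methods are hypothetical and are not the authors' formal commitment semantics.

```python
# Illustrative sketch of a flexible commitment between autonomous agents:
# one agent (the debtor) commits to another (the creditor) to bring about a
# condition, and the commitment can be cancelled or modified, as discussed above.
# Names and structure are assumptions, not the paper's formalism.

class Commitment:
    def __init__(self, debtor, creditor, condition):
        self.debtor = debtor        # agent that commits (e.g. the vendor)
        self.creditor = creditor    # agent the commitment is owed to (e.g. the purchaser)
        self.condition = condition  # what the debtor must bring about
        self.active = True

    def cancel(self, reason):
        """Release the debtor, e.g. after an exception such as a plant failure."""
        self.active = False
        return f"{self.debtor} released: {reason}"

    def modify(self, new_condition):
        """Replace the condition, e.g. the purchaser changes the error tolerance."""
        self.condition = new_condition

c = Commitment("vendor", "purchaser", "deliver parts at tolerance 0.01")
c.modify("deliver parts at tolerance 0.005")
print(c.condition)  # -> deliver parts at tolerance 0.005
print(c.cancel("plant down after earthquake"))
print(c.active)  # -> False
```

The point of the sketch is that constraint and autonomy coexist: the agent voluntarily adopts the commitment, yet retains the ability to revise it when the environment changes.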
Doc 368 : A Nodal Approach to Modeling Human-Agents Collaboration
In this paper, we present the concept of a node, which consists of a human actor, one or more agents, and their combined functions, representing a collective intelligent entity. The instantiation of nodes with diverse pre-defined functions in a workflow process can represent a domain in which humans interact with other humans via software agents in a collaborative environment to achieve some common goal. Here, the agents' functions supplement the demands of the corresponding human's pre-defined functions. As part of this research, a survey was conducted to determine generalized functions of humans and agents in a node. The aim was to solicit information pertaining to humans' daily tasks and the kind of assistance they would prefer to have to ease those tasks. The tasks entail communicating with people using several devices and/or media such as Document, Email, Phone, and SMS. This paper proposes a Nodal Approach (NA) to simplified modeling of humans and software agents with their predefined functions for collaboration. An example user application is developed and tested, involving several academician functions assisted by their corresponding software agents. General Terms: Collaboration, Environment, Behaviors, Autonomy, Models.
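The node described above, a human actor plus one or more supplementing agents with their combined functions, can be sketched as a small data structure. The class and field names are hypothetical illustrations of the concept, not the paper's actual model.

```python
# Minimal sketch of the node concept described above: a human actor with
# pre-defined functions, supplemented by software agents whose functions
# combine with the human's. All names here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    functions: set  # functions this agent contributes to the node

@dataclass
class Node:
    human: str
    human_functions: set
    agents: list = field(default_factory=list)

    def combined_functions(self):
        """Union of the human's functions and all supplementing agent functions."""
        funcs = set(self.human_functions)
        for agent in self.agents:
            funcs |= agent.functions
        return funcs

n = Node("lecturer", {"email", "phone"},
         [Agent("scheduler", {"calendar"}), Agent("filter", {"email"})])
print(sorted(n.combined_functions()))  # -> ['calendar', 'email', 'phone']
```

A workflow would then be a graph of such nodes, with each node exposing its combined functions to collaborating nodes.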
Doc 390 : Adding intelligent agents to existing EI frameworks
The global marketplace and the Internet have served as catalysts for enterprise integration (EI), both within a company and between a company and its suppliers and partners. EI software architectures, or frameworks, are often built on top of standard middleware and typically lack the ability to function in dynamic environments where flexibility, adaptability, and knowledge management are crucial. Furthermore, they do not scale well to wide-area enterprises that extend across organizational boundaries. These frameworks nevertheless offer a cost-effective basis for resource management, user coordination, knowledge exchange, and information extraction in a virtual enterprise. The paper considers how, by incorporating agents with their inherently distributed characteristics of autonomy, reasoning or intelligence, and goal-driven behavior, existing EI frameworks can be enhanced to support adaptive virtual enterprises.
The use of the Internet in commercial activities is starting to emerge worldwide: commercial activities via the Internet are gradually changing daily practice and future possibilities. This opens many new possibilities, opportunities, and challenges in electronic trading. Electronic trading lowers the cost of trading and improves trading efficiency. Marketplaces typically consist of autonomous parties that interact with each other in several ways. Examples are product advising, sales negotiation, information providing, etc. In daily life, the number of parties and products is limited due to space and time constraints. In e-trading, such constraints need not exist, and a wide variety of suppliers, products, and clients become readily available. Agent technology can enhance electronic marketplaces by exploiting the autonomy and flexibility of software agents: a software agent is a piece of software that is autonomous and that is owned by some party. In this paper we review the uses of agent technology in electronic trading. Agent technology offers the following advantages: service customization, adaptability, and support for Component Based Software Engineering (CBSE).
Doc 398 : Social media platforms as educational interfaces - considering the way forward for blended learning
Despite their pervasiveness, Virtual Learning Environments (VLEs) are subject to a number of criticisms of their efficacy as platforms for blended learning (Weller, 2006). This paper considers the use of the social media platform Padlet as an alternative educational interface through the evaluation of an intervention-based case study. Focus groups were conducted with the student cohort, identifying a number of recurring themes that form the main body of this paper: visuality and clarity of the interface, autonomy over learning, students as co-producers, and critical reflection. These themes address the intervention itself and acknowledge the limitations of the research by proposing potential developments. The paper concludes by drawing together questions for further research, and notably reflects on whether it is more pertinent to address the way we use educational interfaces, rather than the interfaces themselves, in considering 'the way forward' for blended learning in Higher Education.
Doc 400 : A Communication Robot in a Shopping Mall
This paper reports our development of a communication robot for use in a shopping mall to provide shopping information, offer route guidance, and build rapport. In the development, the major difficulties included sensing human behaviors, conversation in a noisy daily environment, and the needs of unexpected miscellaneous knowledge in the conversation. We chose a network-robot system approach, where a single robot’s poor sensing capability and knowledge are supplemented by ubiquitous sensors and a human operator. The developed robot system detects a person with floor sensors to initiate interaction, identifies individuals with radio-frequency identification (RFID) tags, gives shopping information while chatting, and provides route guidance with deictic gestures. The robot was partially teleoperated to avoid the difficulty of speech recognition as well as to furnish a new kind of knowledge that only humans can flexibly provide. The information supplied by a human operator was later used to increase the robot’s autonomy. For 25 days in a shopping mall, we conducted a field trial and gathered 2642 interactions. A total of 235 participants signed up to use RFID tags and, later, provided questionnaire responses. The questionnaire results are promising in terms of the visitors’ perceived acceptability as well as the encouragement of their shopping activities. The results of the teleoperation analysis revealed that the amount of teleoperation gradually decreased, which is also promising.
Doc 420 : How Does My Research Question Come About? The Impact of Funding Agencies in Formulating Research Questions
It is a well-known post-positivist myth that the empirical researcher can be politically and ethically neutral with respect to his/her field, and even many qualitative researchers somehow agree with this statement, since neutrality seems to guarantee an inquiry’s rigor, trustworthiness, and legitimacy (Diebel, 2008). The researcher is supposed to be politically independent in designing the research project and ethically detached from the participants. The paradigm turn (Lincoln & Guba, 1985; Denzin & Lincoln, 2005), and especially the critical theory paradigm, has powerfully criticized this assumption from a political perspective (Kincheloe & McLaren, 2005). In particular, many scholars have argued that locating funding for qualitative research is a political process (Cheek, 2005; Roth, 2002). However, what I would like to illustrate here is that the funding body also has a methodological impact of which the researcher should be aware and with which he or she should learn to deal. A few years ago, I was funded by a broadcasting company associated with the former Italian premier Silvio Berlusconi. This circumstance made me very concerned and unsettled for many reasons. But it was also the reason for taking very seriously the problem of the ethical (and political) underpinnings of funded qualitative research. Doing research under these circumstances made me understandably worried about the possibility of carrying out my research with the autonomy I wanted. The research topic–the television experience of 3- to 6-year-old children–was very controversial, and a heated debate divided, and still divides, professionals, academics, and NGOs into two opposing perspectives on the role of TV viewing for young children. And I came to realize that the impact of funding agencies also has methodological consequences for research practice.
After a brief review of some main positions on the issue of developing the research topic and the research question, I will examine the sources from which a topic or a general problem comes about, and then how it can be shaped into a workable research question. In this passage, I identify four specific constraints. In this paper, however, I will deepen only one of them: the funder-related constraints. I will illustrate the methodological impact of funding agencies by presenting a case of research on the use of television in families, funded by a mixed board of television industries and children’s advocate groups. Then, I will show four possible solutions to the methodological problem of funding, related to four different paradigms, and, finally, I will present what we actually did in our research to deal with this issue. 1. Choosing a Topic Choosing a research topic (or problem) in qualitative inquiry and establishing a workable research question are two different but interrelated processes, and it is difficult to say which comes first (Miles & Huberman, 1994). Researchers are always embodied in the world they wish to explore, and their stance is never neutral or separate from the world in which they live. So the research question that one cuts off from the whole topic is difficult to isolate, like a figure from its background. However, in the practice of research the passage from the first step to the second is necessary, and it represents one of the most difficult processes, especially (but not only) for novices (Silverman, 2000). Traditionally, every research project starts from a topic that must then be transformed (or reduced) into an experimental hypothesis or a focused research question. Quantitative research calls this delicate passage operationalization–the transformation of an abstract concept into something measurable.
Incidentally, at the beginning of the 1970s even some qualitative methodologists talked about operationalization (Schatzman & Strauss, 1973, p. 101; see also Denzin, 1971). Before examining the problems in qualitative research that are also related to the transformation/reduction of a concept into a workable research question, it is interesting to see how a research theme emerges and how one comes to consider a research area as relevant. …
This article introduces the notion of ‘routinisation’ into discussions of informed consent. It is argued that the routinisation of informed consent poses a threat to the protection of a patient’s personal autonomy through the negotiation of informed consent. On the basis of a large survey, we provide evidence of the routinisation of informed consent in various types of interaction on the internet, among these the routinisation of consent to the exchange of health-related information. We also provide evidence that the extent of the routinisation of informed consent depends on the character of the information exchanged, and we uncover a range of causes of routinisation. Finally, the article discusses possible ways of countering the problem of the routinisation of informed consent.
Doc 433 : The hardware and software aspects of energy consumption in the mobile development platform
Purpose – The set of services provided by the mobile phone platform is becoming increasingly complex, requiring more computing power and hence higher energy consumption, and compromising the autonomy of these devices. The purpose of this paper is to identify scenarios where methods could be applied to reduce such consumption and extend mobile autonomy. Design/methodology/approach – This mobile evolution has given rise to much energy-saving research activity, which mainly focuses on the hardware side of computational systems. However, it is tempting to suppose that only hardware dissipates power, not software. This paper characterizes several hardware and software scenarios which could be explored to develop energy-efficient mobile techniques. Findings – From this analysis, the authors argue that the development of applications that consider energy saving as one of their requirements can result in a significant final energy saving, because the solutions will be part of the software itself and they do no…
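One classic software-side scenario of the kind the abstract alludes to is batching network transfers so the radio powers up once rather than once per request. The following sketch and its cost constants are illustrative assumptions, not figures from the paper.

```python
# Illustrative energy model: waking the mobile radio has a fixed cost, so
# coalescing pending transfers into one wakeup saves energy in software
# alone, with no hardware change. Constants are arbitrary assumed units.

WAKEUP_COST = 10.0    # assumed fixed energy cost of powering up the radio
PER_BYTE_COST = 0.01  # assumed marginal energy cost per byte sent

def energy_unbatched(payload_sizes):
    """Each request wakes the radio separately."""
    return sum(WAKEUP_COST + PER_BYTE_COST * size for size in payload_sizes)

def energy_batched(payload_sizes):
    """All pending requests share a single radio wakeup."""
    return WAKEUP_COST + PER_BYTE_COST * sum(payload_sizes)
```

For five 100-byte requests the unbatched model pays the wakeup cost five times, the batched model only once, which is the kind of application-level requirement the Findings section argues for.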
Doc 439 : Role-based Interaction Infrastructures for Internet Agents
SUMMARY Without doubt, the Internet will gain advantages from exploiting software agents for applications, thanks to their autonomy in carrying out tasks. In such a scenario, appropriate methodologies are needed to manage the interactions among agents. The BRAIN framework proposes an interaction model based on roles, each composed of a set of capabilities and an expected behavior. The advantages achieved are agent-oriented features, separation of concerns, and reuse of solutions and experiences. In this paper we present two interaction infrastructures, Rolesystem and RoleX, which rely on the above-mentioned role-based interaction model. These infrastructures allow agents to assume roles and to interact. An application example and a comparison with other approaches show the effectiveness of the approach.
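The role-based model can be sketched minimally: a role bundles a set of capabilities with an expected behavior, and an agent holds those capabilities only while it assumes the role. This is a hypothetical illustration of the concept, not the actual Rolesystem or RoleX API.

```python
# Hypothetical sketch of role-based interaction: capabilities belong to the
# role, not the agent, so assuming/releasing a role changes what the agent
# may do. This separation of concerns is the point of the model.

class Role:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)

class Agent:
    def __init__(self, name):
        self.name = name
        self.role = None

    def assume(self, role):
        """Take on a role, gaining its capabilities for the duration."""
        self.role = role

    def can(self, capability):
        # Capabilities come from the assumed role, never the agent itself.
        return self.role is not None and capability in self.role.capabilities
```

A "buyer" role with `{"request_quote", "place_order"}` capabilities, for instance, could be assumed by any agent entering a marketplace and released on exit, letting solutions be reused across agents.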
Doc 444 : The theoretical basis of the effective school improvement model (ESI)
This article describes the process of theoretical reflection that preceded the development and empirical verification of a model of “effective school improvement”. The focus is on basic mechanisms that could be seen as underlying “getting things in motion” and change in education systems. Four mechanisms are distinguished: synoptic rational planning, the market mechanism, cybernetics, and autopoiesis. Principles relevant to effective school improvement deduced from these basic mechanisms are: goal setting for improvement, pressure to improve, cyclical improvement processes, and autonomy. The article also briefly touches upon the way empirical models of school effectiveness and school improvement can be linked and used in the encompassing model of effective school improvement that the ESI project has yielded.
The proliferation of the Internet has given different entities opportunities to share resources and conduct business transactions. However, establishing trust among strangers without a prior relationship or a common security domain poses much difficulty for these activities. To resolve these problems, a promising approach known as Automated Trust Negotiation (ATN) has been proposed, which establishes trust between strangers through the iterative disclosure of credentials and access control policies. In this paper, a comprehensive survey of research on ATN is presented, and some basic techniques, e.g. negotiation models and architectures, access control policy specification, credential description, and credential chain discovery, are introduced and compared. Then, based on an analysis of the shortcomings and problems of these techniques, trends in research and application are discussed. All this work may contribute to further work on trust establishment for entities with privacy protection and autonomy on the open Internet.
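The iterative-disclosure idea at the core of ATN can be sketched as a fixed-point loop: each party releases a credential only once the other party has already disclosed the credentials that the release policy demands. This is an illustrative simplification with hypothetical names, not a survey-accurate negotiation protocol.

```python
# Sketch of iterative credential disclosure in automated trust negotiation.
# Each policy maps a credential to the set of the *other* party's
# credentials that must already be disclosed before it may be released.

def negotiate(policies_a, policies_b, target):
    """Return True if credential `target` (held by party B) can
    eventually be disclosed through alternating releases."""
    seen_a, seen_b = set(), set()
    progress = True
    while progress:
        progress = False
        # Party A releases every credential whose policy is now satisfied.
        for cred, required in policies_a.items():
            if cred not in seen_a and required <= seen_b:
                seen_a.add(cred)
                progress = True
        # Party B does the same against A's disclosures.
        for cred, required in policies_b.items():
            if cred not in seen_b and required <= seen_a:
                seen_b.add(cred)
                progress = True
        if target in seen_b:
            return True
    # No further disclosures possible: negotiation is stuck.
    return False
```

For example, an unconditionally releasable `student_id` can unlock a `discount` credential whose policy requires it, whereas a policy demanding an `employee_id` that neither side can satisfy leaves the negotiation stuck.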
Doc 453 : Multidimensional strategies for learning evaluation in on-line courses
This work presents the results of a qualitative investigation supported by descriptions from an on-line collaborative specialization course, with emphasis on the process of evaluating students’ learning. The investigation showed that, as a result of the possibilities that information and communication technologies create for people, the evaluation process cannot be limited to the logic of a definitive, post-procedural examination; rather, it should be applied continuously, in all ways, and across all activities and practices in a course environment. This approach has a multidimensional nature and is able to handle the complexity of the related knowledge, taking into account many possibilities of interaction, different times and spaces for learning and teaching, the related interfaces, and interdisciplinary approaches in the analyzed collaborative environment. The theoretical references explore evaluation concepts and their predominant practice, and examine several aspects of information and communication technology that apply to education. Based on these two main fronts, other concepts have been considered in the theoretical field, such as complexity, collaboration, time, space, and interdisciplinary aspects, among others. These have allowed the research to rest on a framework of ideas from which the analysis has been developed, in order to show the broad scope of multidimensional learning evaluation in an on-line collaborative environment, considering that such evaluation allows the individual goals of students and teachers to be set according to a pedagogical strategy that dialectically focuses on autonomy and the collective construction of knowledge.
Another important point demonstrated in this investigation is that people are much more important than technologies, and that collaboration between people is also important for an assessment that spans all moments, taking into account the history of each participant as an individual with a personal history and knowledge gathered from multiple sources and continuously processed.
Doc 454 : Practicing what we teach in teaching systems practice: The action-learning cycle
Respect for autonomy is a powerful tool for managing complexity. It lets natural, mutually supportive order emerge. In Western culture, though, much order is imposed. This causes conflict, which only increases complexity. This conflictual pattern has an antidote in systems practice: the systemic action–learning cycle. When used reflectively at the level of second-order cybernetics, this cycle embodies respect for autonomy. The UK Open University course T306: Managing Complexity—A Systems Approach teaches this action–learning cycle, and uses the cycle in its own teaching. In particular, it uses the cycle in its online conferences, to invite participation and dissolve conflict. This paper shows how.
Doc 461 : Informed consent among analog people in a digital world
Abstract This paper addresses the concept of informed consent when working with remote, non-literate groups. By examining both the legal and moral obligations of informed consent, it will be argued that “erring on the side of caution”, for instance by not publishing on the Internet because the consultants/community do not have exposure to such things, is just as paternalistic as assuming that they would consent if they understood. It is further argued that the researcher has an obligation to explain the research to the consultants/community as fully as possible and to engage in an ongoing negotiation of consent, but that the researcher must respect the autonomy of the consultant/community decision, even if the consent was not fully “informed”.
Doc 539 : Personality, Generalized Self-Efficacy & Team Performance: A Study of Rural Development Teams
Introduction Teams in organizations are specialized forms of work-groups characterized by strong interdependence, autonomy, and shared responsibilities (Guzzo & Dickson 1996). Work teams are being increasingly used in a diverse range of organizations with positive results (Emery & Barker 2007, Neuman & Wright 1999). Researchers have identified cognitive, motivational, and behavioural processes and emergent states that are essential to the effective performance of teams (Kozlowski & Ilgen 2006). The personalities of team members have been found to have considerable influence on group processes (Barrick et al. 1998, Frederick et al. 2005, Suzanne 2007). Similarly, self-efficacy beliefs have been found to be robust predictors of learning in training and of performance in a wide range of situations (Gist 1987, Stajkovic & Luthans 1998). Team member self-efficacy affects decision making in teams (Jeffery et al. 2003), group cohesion (Pillai & Williams 2004), and outcome expectancy (Stone & Bailey 2007). However, most research on teams focuses on industrial and commercial organizations in developed nations. Following the liberalization of the Indian economy, working in teams has become an imperative (Tata & Prasad 2004). It has been argued that to survive in the competitive environment, among other changes, Indians should develop a team-based work culture, as it is an essential ingredient for higher levels of performance and innovation in organizations (Joshi 2001). Creating high-performing self-managed teams has been identified as a major behavioural training need to fulfil the requirements of Indian organizations in the next fifteen years (Sanghi 2003). However, Gupta (2002) has observed that due to ethnic plurality, the importance of hierarchy, and perceived in-group and out-group differences, Indians will resist working in teams. Poor teamwork has been identified as a major issue among the staff members in primary healthcare centres in South India (Nichter 2002).
Teamwork is necessary for most development programs, since multiple skills are needed to manage any development project, whether in healthcare, education, or income generation. However, the all-encompassing heterogeneity (Gupta 2002) and cultural factors (Kumar 2004) have been argued to be major threats to self-managed work teams in Indian organizations. In a significant study of Indian managers, Pearson and Chatterjee (2001) found that within a brief period following the liberalization of the Indian economy, managers in India adopted the values of teamwork and the other qualities needed to remain competitive in an open market, while subjugating societal qualities reinforced over hundreds of years. Similarly, studies in the automobile sector have pointed out that, driven by the desire to become world-class manufacturers following deregulation in the early 90s, some Indian automobile manufacturers transformed themselves into world-class organizations by creating a culture of teamwork (Dangayach & Deshmukh 2001, Som 2006). In a case study of a government organization, it was found that excellent self-managed teams were created through the role played by an extraordinary leader (Dwivedi 2006). Effective teamwork in an Indian manufacturing organization had a significant impact in reducing wastage (Anand 1993). However, only a few studies have focused on teamwork in Indian organizations, and all of them concerned commercial organizations. In a developing country like India, a large number of not-for-profit organizations are engaged in rural development projects for the amelioration of poverty, and these organizations are largely funded on a project basis by donor organizations (Mishra et al. 2005). People in these organizations work as members of rural development project teams (Duraisingam & Dollard 2005).
Interventions in the areas of livelihood, watershed development, health and sanitation, education, and other income generation projects in rural areas have largely been carried out by not-for-profit organizations with or without direct support of other government agencies. …
Doc 544 : Trusted Autonomy and Cognitive Cyber Symbiosis: Open Challenges
This paper considers two emerging, interdisciplinary but related topics that are likely to create tipping points in advancing engineering and science. Trusted Autonomy (TA) is a field of research that focuses on understanding and designing the interaction space between two entities, each of which exhibits a level of autonomy. These entities can be humans, machines, or a mix of the two. Cognitive Cyber Symbiosis (CoCyS) is a cloud that uses humans and machines for decision-making. In CoCyS, human-machine teams are viewed as a network with each node comprising humans (as computational machines) or computers. CoCyS focuses on the architecture and interface of a Trusted Autonomous System. This paper examines these two concepts and seeks to remove ambiguity by introducing formal definitions for them. It then discusses open challenges for TA and CoCyS, that is, whether a team made of humans and machines can work in fluid, seamless harmony.
This paper critiques the idea of full autonomy, as illustrated by Oxford University’s Robotcar. A fully autonomous driverless car relies on no external inputs, including GPS, and learns solely from its environment using learning algorithms. These cars decide when they drive, learn from human drivers, and bid for insurance in real time. Full autonomy is pitched as a good end in itself, fixing human inadequacies and creating safety and certainty through the elimination of human involvement. Using the ACTIVE ethics framework, an ethical response to fully autonomous driverless cars is developed by addressing autonomy, community, transparency, identity, value, and empathy. I suggest that the pursuit of full autonomy does not recognise the essential importance of the interdependencies between humans and machines. The removal of human involvement should require the driverless car to be more connected with its environment, drawing all the information it can from infrastructure, the internet, and other road users. This requires a systemic view, one which addresses systems and relationships, which recognises the place of driverless cars in a connected system, and which is open to the study of complex relationships, both networked and hierarchical.
Doc 556 : Bounded Rationality Through the Filter of the Lisbon Objectives
Information and Communication Technologies (ICT) have created the best conditions for the growth of knowledge societies. An emerging global information society serves to build global knowledge societies as a source for further development. Conventional scientific paradigms are becoming increasingly blemished and prone to redefinition of their foundations, understood as scientific knowledge. The perspectives on knowledge and the ideals of rationality are both heavily influenced by new contemporary scientific thinking, through tools inherent to autonomy and uncertainty. A new understanding of the world in terms of open, dynamic, heterogeneous, uncertain systems is needed. Among the conclusions: classical rational reasoning mainly aims at effectiveness, not at uncertain knowledge processing, because of its temporality (mainly its ineffectiveness in dealing with future events); and a bounded-rationality approach enables both better economic models and better modelling, being based on trends in economic modelling as well as on agent-oriented software engineering.
Doc 558 : Trust in social computing. The case of peer-to-peer file sharing networks
Social computing and online communities are changing the fundamental way people share information and communicate with each other. Social computing focuses on how users may have more autonomy to express their ideas and participate in social exchanges in various ways, one of which may be peer-to-peer (P2P) file sharing. Given the greater risk of opportunistic behavior by malicious or criminal communities in P2P networks, it is crucial to understand the factors that affect individuals’ use of P2P file sharing software. In this paper, we develop and empirically test a research model that includes trust beliefs and perceived risks as two major antecedent beliefs of usage intention. Six trust antecedents are assessed, including knowledge-based trust, cognitive trust, and both organizational and peer-network factors of institutional trust. Our preliminary results show general support for the model and offer some important implications for software vendors in the P2P sharing industry and for regulatory bodies.
Doc 572 : A Review of Theoretical and Practical Challenges of Trusted Autonomy in Big Data
Despite the advances made in artificial intelligence, software agents, and robotics, there is little we see today that we can truly call a fully autonomous system. We conjecture that the main inhibitor for advancing autonomy is lack of trust. Trusted autonomy is the scientific and engineering field to establish the foundations and ground work for developing trusted autonomous systems (robotics and software agents) that can be used in our daily life, and can be integrated with humans seamlessly, naturally, and efficiently. In this paper, we review this literature to reveal opportunities for researchers and practitioners to work on topics that can create a leap forward in advancing the field of trusted autonomy. We focus this paper on the trust component as the uniting technology between humans and machines. Our inquiry into this topic revolves around three subtopics: 1) reviewing and positioning the trust modeling literature for the purpose of trusted autonomy; 2) reviewing a critical subset of sensor technologies that allow a machine to sense human states; and 3) distilling some critical questions for advancing the field of trusted autonomy. The inquiry is augmented with conceptual models that we propose along the way by recompiling and reshaping the literature into forms that enable trusted autonomous systems to become a reality. This paper offers a vision for a Trusted Cyborg Swarm, an extension of our previous Cognitive Cyber Symbiosis concept, whereby humans and machines meld together in a harmonious, seamless, and coordinated manner.
Doc 586 : An Effective Decentralized Organization Mechanism for Resources in Grid
Effectively organizing and scheduling resources on the Internet is a critical problem in grid computing. An effective organizing mechanism based on the spaces model is proposed. The novel mechanism divides resources into different area autonomy systems that can collaborate with each other. Multiple distributed space services can effectively aggregate neighboring resources within an area autonomy system. Common interfaces are offered to operate on information in the spaces. The mechanism has been applied to build an enterprise grid computing platform in a group company, where it has been found to work well.
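The "common interfaces to operate on information in the spaces" can be pictured as a small tuple-space-like API. The following is a minimal sketch under assumed semantics (write, non-destructive read, destructive take), not the paper's actual implementation.

```python
# Minimal sketch of a space service's common interface for resource
# information within one area autonomy system. Entries are plain dicts;
# `match` is a predicate supplied by the caller.

class Space:
    def __init__(self):
        self._entries = []

    def write(self, entry):
        """Publish a resource description into the space."""
        self._entries.append(entry)

    def read(self, match):
        """Non-destructive lookup of the first matching entry."""
        return next((e for e in self._entries if match(e)), None)

    def take(self, match):
        """Destructive lookup: remove and return the matching entry,
        e.g. when a scheduler claims a resource."""
        entry = self.read(match)
        if entry is not None:
            self._entries.remove(entry)
        return entry
```

Multiple such spaces, one per area autonomy system, could then aggregate neighboring resources while collaborating through the same interface.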
Doc 610 : Application of Mobile Agent in E-commerce
Electronic commerce is developing rapidly because of its convenience, speed, and low cost, but it also suffers from many problems, such as cumbersome information inquiry, lack of autonomy, serious waste of network resources, heavy system load, low efficiency, and no support for offline or multi-task operation. Introducing mobile agents into e-commerce can solve the above problems effectively. This paper presents an electronic commerce application system model based on mobile agents to address many of the problems in current electronic commerce systems.
Doc 672 : Situation Monitoring Model and Implementation Technique for Autonomous Web Services
In order to deal with the open and dynamic Internet environment, deployed services should be enriched with autonomy, and their situation should be effectively monitored and managed, so as to enable the flexible application and dynamic adaptation of autonomous Web services. We propose and realize a situation model and monitoring architecture for autonomous Web services by extending traditional SOA with autonomy. We also design and implement a corresponding software platform, and conduct a case study to validate the feasibility of our approach.
Doc 691 : Emergence and Downward Causation in Contemporary Artificial Agents: Implications for their Autonomy and Some Design Guidelines.
Contemporary research in artificial environments has marked the need for autonomy in artificial agents. Autonomy has many interpretations depending on the field within which it is used and analyzed, but the majority of researchers in artificial environments argue in favor of a strong and life-like notion of autonomy. Departing from this point, the main aim of this paper is to examine the possibility of the emergence of autonomy in contemporary artificial agents. The theoretical findings of research in the areas of living and cognitive systems suggest that the study of autonomous agents should adopt a systemic and emergent perspective for the analysis of the evolutionary development of the notions/properties of autonomy, functionality, intentionality, and meaning, as the fundamental and characteristic properties of a natural agent. An analytic indication of the functional emergence of these concepts and properties is provided, based on the characteristics of the more general systemic framework of second-order cybernetics and of the interactivist framework. The notion of emergence is a key concept in such an analysis, which in turn provides the ground for the theoretical evaluation of the autonomy of contemporary artificial agents with respect to the functional emergence of their capacities. The fundamental problems for the emergence of genuine autonomy in artificial agents are critically discussed, and some design guidelines are provided.
Doc 717 : Concept maps: A tool for knowledge management and synthesis in web-based conversational learning.
Web-based conversational learning provides an opportunity for shared knowledge-base creation through collaboration and the extraction of collective wisdom. Usually, the amount of information generated in such forums is very large and multidimensional (in alignment with the desirable preconditions for constructivist knowledge creation), and sometimes the nature of the expected new information cannot be anticipated in advance. Thus, concept maps (crafted from the constructed data) may serve as summary tools that improve critical thinking and learning by making connections between the facts or knowledge shared by the participants during online discussion. This exploratory paper begins with a description of this innovation, tried on a web-based interaction platform (email list management software), FAIMER-Listserv, and the qualitative evidence generated through peer feedback. This process description is further supported by a theoretical construct which shows how social constructivism (inclusive of autonomy and complexity) affects conversational learning. The paper rationalizes the use of the concept map as a mid-summary tool for extracting information and making further sense of this apparent intricacy.
Doc 722 : Mastering Complexity with Autonomous Production Processes
Abstract To consolidate and improve a company’s market position, it is necessary to master the increased complexity of production processes with suitable methods. This paper examines whether and to what extent autonomous production processes are suitable for mastering the complexity of production processes. The paper starts with an introduction to the problem definition, followed by an explanation of the theoretical foundations of complexity in production, autonomy, and cyber-physical production systems. In addition, selected existing methods for mastering complexity are presented. The second part of the paper begins with an introduction to measuring the degree of autonomy in production processes, which is the basis for the subsequent simulation-based analysis. Afterwards, the simulation environment is presented. The third chapter covers the experimental analysis of the stated research question: the experimental set-up and its implementation are presented. The paper ends with an outlook on further evaluation activities.
Doc 733 : How Can Polycentric Governance of Spectrum Work
Spectrum policy in the US (and throughout most of the world) consists generally of a set of nationally determined policies that apply uniformly to all localities. However, there is also considerable variation in the features (e.g., traffic demand or population density), requirements, and constraints of spectrum use on a local basis. Global spectrum policies designed to resolve a situation in New York City could well be overly restrictive for communities in central Wyoming. At the same time, it is necessary to ensure that the more permissive policies of central Wyoming would not create problems for NYC (by ensuring, for example, that relocated radios adapt to local policies). Notions of polycentric governance articulated by the late E. Ostrom argue that greater good can be achieved by allowing for local autonomy in resource allocation. Shared access to spectrum is generally mediated through one of several technologies. As Weiss, Altamimi and Liu show, approaches mediated by geolocation databases are the most cost-effective with today’s technology. In the database-oriented Spectrum Access System, or SAS, proposed by the FCC, users are granted (renewable) usage rights based on their location for a limited period of time. Because this system grants usage rights on a case-by-case basis, it may also allow for greater local autonomy while still maintaining global coordination. For example, it would be technically feasible for the database to include parameters such as transmit power, protocol, and bandwidth. Thus, it may provide the platform by which polycentric governance might come to spectrum management.
In this paper, we explore, through some case examples, what polycentric governance of spectrum might look like and how this could be implemented in a database-driven spectrum management system. The technical architecture envisions a geo-located and networked radio that controls the protocols and transmission parameters of the radios associated with it. This appliance (and the associated radios) might be owned and operated by a landlord, a farmer, an Internet service provider, etc. The operating parameters of the radios, such as power, bandwidth, and protocols (e.g. waiting times in WiFi), would gradually become more permissive until an enforceable event occurred. At this point, the parameters would become more restrictive. Note that this is consistent with the notion of graduated sanctions in the CPR literature. Enforceable events would be resolved between appliances through an automated negotiation protocol. This may also involve fusing authenticated data regarding the enforceable event. If the frequency of enforceable events increased above a threshold, a super-local (or regional) coordinating device would be invoked to optimize use of the spectrum. In this way, spectrum governance is nested, providing resilience against a single point of failure in the governance process. This represents a means to learn from local knowledge and policy changes, resulting in a system that is more responsive to environmental threats at multiple scales. In the end, we have a system that can compensate for the failure of some units with the successful response of others. This can be an advantage with respect to global policy mechanisms. Thus spectrum management would be best modelled as an emergent phenomenon rather than a top-down system. This paper will describe the key details of this system and present some initial modelling results in comparison with the traditional global model of spectrum regulation. It will also address some of the concerns with this approach.
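The graduated-sanctions dynamic described above (parameters loosen while no enforceable event occurs, then tighten when one does) can be sketched as a one-step update rule. The bounds and step sizes here are illustrative assumptions, not values from the paper.

```python
# Sketch of graduated sanctions for one radio parameter (transmit power):
# creep upward toward a cap while behavior is clean; halve on an
# enforceable event, never dropping below the floor. Units are arbitrary.

P_MIN, P_MAX = 1.0, 10.0  # assumed lower/upper power bounds
STEP = 0.5                # assumed per-interval loosening increment

def next_power(power, enforceable_event):
    """Return the radio's permitted power for the next interval."""
    if enforceable_event:
        # Graduated sanction: restrict parameters after a violation.
        return max(P_MIN, power / 2)
    # Otherwise gradually become more permissive.
    return min(P_MAX, power + STEP)
```

A coordinating appliance could apply the same rule to bandwidth or protocol waiting times, invoking a regional coordinator only when the event rate crosses a threshold.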
Doc 734 : Determinants of an Appropriate Degree of Autonomy in a Cyber-physical Production System
Abstract Classical production systems are migrating step-by-step into cyber-physical production systems. The addition of much more computing power and object-bound data storage will lead to new possibilities for the advancement of autonomy in production systems. Autonomous message exchange and coordination can help to prevent quality problems (for instance, wrong pairing of tool and work piece) and improve disturbance management (for instance, by faster information about current and probable disturbances). Because nearly all improvements of existing production systems with cyber-physical systems take place in real and active manufacturing sites, on-site experiments for determining an appropriate degree of autonomy for production objects are not feasible. Therefore, a lab approach is necessary. In this contribution, a hybrid lab approach to simulate various degrees of autonomy is presented [1] . The paper starts with a definition of autonomy and suggests diverse measurement methods [2] . After a short introduction to the lab concept, the results of some test runs are presented in which autonomous objects perform the same production program as "dumb" production objects. Finally, an outlook for further research is given.
Doc 753 : Internet of things applied on a supervisory system for modular production system stations
The advent of new technologies gives companies the possibility of creating a strong competitive position by successfully using recent concepts. New ideas have brought the concepts that represent the coming fourth industrial revolution, among them the Internet of Things, which, when properly applied, enables systems to have greater autonomy through decisions taken from data collected from the Internet and from the process itself. Considering that most applications of this concept are related to home automation, this article demonstrates the use of the Internet of Things in an industrial process represented at laboratory scale, monitored in real time by a supervisory system that initiates production from demand information simulated by a Web page.
Doc 769 : Cyber-Maritime Cycle: Autonomy of Marine Robots for Ocean Sensing
Marine robots are playing important roles in environmental sensing and ocean observation applications. This tutorial introduces the overall systems architecture and patterns for data streams that enable autonomy for marine robots in environmental sensing applications. The article proposes the concept of a cyber-maritime cycle and surveys its use as a recent development in marine robotics. Supported by communication networks, autonomy can be achieved using at least three feedback loops in a cyber-maritime cycle, each running at a different time scale or temporal frequency. As information circulates around the cycle, it is transformed between two representations: the Lagrangian view and the Eulerian view. Important functional blocks, such as mission planning, path planning, data assimilation, and data-driven modeling, are discussed as providing conversions between the two views of data. The tutorial starts with an overview of enabling technologies in sensing, navigation, and communication for marine robotics. The design-of-experiment method is then reviewed to plan optimal sensing locations for the robots. The tutorial discusses a class of path planning methods that produce desired trajectories of marine robots while combating ocean currents. The lack of an accurate Eulerian map of the ocean current will lead to tracking error when robots attempt to follow the planned paths to collect Lagrangian data. The performance of robot navigation can be evaluated through the controlled Lagrangian particle tracking method, which computes trends and bounds for the growth of the tracking error. To improve the accuracy of the Eulerian map of the ocean current, a data-driven modeling approach is adopted. Data assimilation methods are leveraged to convert Lagrangian data into an Eulerian map. In addition, the spatial and temporal resolution of Eulerian data maps can be further improved by the motion tomography method. This tutorial gives a comprehensive view of the data streams and major functional blocks underlying the autonomy of marine robots.
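The Lagrangian-to-Eulerian conversion at the heart of the cycle can be illustrated, in its crudest form, by spatially binning velocity samples collected along robot trajectories into a cell-averaged current map. This is only an assumed toy illustration; the data assimilation and motion tomography methods the tutorial surveys are far richer.

```python
# Toy Lagrangian-to-Eulerian conversion: average velocity observations
# taken along robot paths (Lagrangian data) into per-cell estimates of
# the current field (an Eulerian map).
from collections import defaultdict

def lagrangian_to_eulerian(samples, cell=1.0):
    """samples: iterable of (x, y, u, v) velocity observations along paths.
    Returns {(i, j): (u_mean, v_mean)} averaged over each grid cell."""
    acc = defaultdict(lambda: [0.0, 0.0, 0])
    for x, y, u, v in samples:
        key = (int(x // cell), int(y // cell))
        acc[key][0] += u
        acc[key][1] += v
        acc[key][2] += 1
    return {k: (su / n, sv / n) for k, (su, sv, n) in acc.items()}
```

Cells never visited by a robot simply have no entry, which is precisely the gap that data-driven modeling and motion tomography aim to fill.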
Doc 780 : Determination of the Optimal Degree of Autonomy in a Cyber-Physical Production System
Abstract Classical production systems are migrating step-by-step into cyber-physical production systems. The addition of much more computing power and object-bound data storage will lead to new possibilities for the advancement of autonomy in production systems. Autonomous message exchange and coordination can help to prevent quality problems (for instance, wrong pairing of tool and work piece) and improve disturbance management (for instance, by faster information about current and probable disturbances). Because nearly all improvements of existing production systems with cyber-physical systems take place in real and active manufacturing sites, on-site experiments to find the right degree of autonomy for production objects are not suitable. Therefore, a lab approach is necessary. In this contribution, a hybrid lab approach to simulate various degrees of autonomy is presented [1] . The paper starts with a definition of autonomy and suggests measurement methods [2] . After a short introduction to the lab concept, the results of some test runs are presented in which autonomous objects perform the same production program as "dumb" production objects. Finally, an outlook for further research is given.
Doc 781 : LISA — A Mechatronic Wall for Assistance with ADLs
The research project LISA (Living Independently in Sudtirol Alto-Adige) investigated the possibilities of embedding assistive functions, systems, and services into wall “terminal” components that enable and support autonomy and independence with respect to Activities of Daily Living (ADLs), and which generate structured environments called Robotic micro-Rooms (RmRs).
As autonomous and semiautonomous systems are developed for automotive, aviation, cyber, robotics and other applications, the ability of human operators to effectively oversee and interact with them when needed poses a significant challenge. An automation conundrum exists in which as more autonomy is added to a system, and its reliability and robustness increase, the lower the situation awareness of human operators and the less likely that they will be able to take over manual control when needed. The human-autonomy systems oversight model integrates several decades of relevant autonomy research on operator situation awareness, out-of-the-loop performance problems, monitoring, and trust, which are all major challenges underlying the automation conundrum. Key design interventions for improving human performance in interacting with autonomous systems are integrated in the model, including human-automation interface features and central automation interaction paradigms comprising levels of automation, adaptive automation, and granularity of control approaches. Recommendations for the design of human-autonomy interfaces are presented and directions for future research discussed.
Doc 784 : Toward a Petri Net Based Model to Control Conflicts of Autonomy between Cyber-Physical & Human-Systems
Abstract: A dissonance is a conflict between individual, collective or organizational knowledge. This concept is extended to a conflict of autonomy between components of a human-machine system, such as Cyber-Physical & Human Systems (CPHS). The autonomy of the CPHS is modeled by a triplet representing three sets of knowledge: the Competence, the Availability and the Prescription. The so-called Competence-Availability-Prescription (CAP) model is then proposed to represent the capability of the CPHS to act alone and to control possible emergent behaviours such as conflicts of autonomy in terms of competence, availability and/or prescription. The formalism of the Petri nets is used to model the three CAP model parameters and to control possible conflicts between them. A feasibility study of the application of such CAP model and Petri nets is presented for the car driving domain involving the car driver interacting with Cyber-Physical Systems (CPS) such as an Automated Speed Control System (ASCS).
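A toy reading of the CAP triplet can be expressed as a minimal Petri-net fragment: an "act autonomously" transition is enabled only when the Competence, Availability, and Prescription places are all marked, and an unmarked place flags a potential conflict of autonomy. The class and method names below are illustrative assumptions, not the paper's formalism.

```python
# Assumed Petri-net sketch of the CAP model: three places (Competence,
# Availability, Prescription) must all hold a token for the autonomous
# action transition to fire; otherwise a conflict of autonomy is raised.
class CapNet:
    def __init__(self, competence=0, availability=0, prescription=0):
        self.marking = {"C": competence, "A": availability, "P": prescription}

    def enabled(self) -> bool:
        # transition enabled only when every input place is marked
        return all(self.marking[p] >= 1 for p in ("C", "A", "P"))

    def fire(self) -> str:
        if not self.enabled():
            # conflict of autonomy: at least one knowledge set is missing
            missing = [p for p, m in self.marking.items() if m < 1]
            raise RuntimeError(f"autonomy conflict, missing: {missing}")
        for p in ("C", "A", "P"):
            self.marking[p] -= 1
        return "acted autonomously"
```

In the car-driving example, an Automated Speed Control System with competence and availability but no prescription to act would correspond to a `CapNet` whose "P" place is empty.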
Doc 809 : A Hybrid Trust Evaluation Framework for E-Commerce in Online Social Network: A Factor Enrichment Perspective
The autonomy and openness of E-commerce in online social (ECOS) networks pose a challenge to the security of transactions, as it is difficult to ensure the reliability and trustworthiness of parties on both ends. Transactions in ECOS may therefore be conducted in an unreliable environment and be vulnerable to fraud. Trust management schemes have naturally emerged as feasible solutions. To improve on existing trust management mechanisms, in this paper we propose a factor-enrichment-based hybrid trust framework for trust measurement in ECOS, in which three levels of trust are used to establish trustworthy opinions among individuals for their transactions: 1) private reputation, defined as the subjective trustworthy impression among individuals, with its feature of dynamic evolution; 2) common reputation, defined as a collective and sharable trust degree, proposed with two factors, a consistency factor and a continuity factor, introduced to enhance the reliability of common reputation; and 3) hybrid trust, proposed to obtain integrated trustable impressions based on private reputation and common reputation, with an anti-fraud factor and a confidence factor presented to further determine the trustworthiness of hybrid trust. Finally, we report the results of a series of experiments to further verify the performance of our mechanism.
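The core idea of blending the three levels can be sketched as a confidence-weighted combination of private and common reputation. The simple linear form below is an assumption for illustration; the paper's actual consistency, continuity, and anti-fraud factors enter in richer ways.

```python
# Assumed sketch: hybrid trust as a confidence-weighted blend of a
# subjective private reputation and a shared common reputation.
def hybrid_trust(private_rep: float, common_rep: float,
                 confidence: float) -> float:
    """All inputs lie in [0, 1]. Higher confidence in direct experience
    shifts weight toward the private reputation."""
    for v in (private_rep, common_rep, confidence):
        if not 0.0 <= v <= 1.0:
            raise ValueError("trust inputs must lie in [0, 1]")
    return confidence * private_rep + (1.0 - confidence) * common_rep
```

For example, with high confidence in first-hand interactions (confidence = 0.75), a private reputation of 0.9 dominates a common reputation of 0.5, yielding a hybrid score of 0.8.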
Doc 844 : Enhancing the Meaningfulness of Work for Astronauts on Long Duration Space Exploration Missions
Numerous authors have identified the stressors likely to be encountered on long duration space exploration missions (e.g., to Mars), including the possibility of significant crises, separation from family, boredom/monotony, and interpersonal conflict. Although many authors have noted that meaningful work may be beneficial for astronauts on these missions, none have detailed the sources of meaningful work for astronauts and how these sources may differ between astronauts. The present article identifies how engagement in meaningful work during long duration missions may mitigate the adverse effects of demands and increase the potential for benefits resulting from the missions. Semistructured interviews were conducted with nine NASA personnel, including astronauts, flight directors, and flight surgeons. Questions addressed sources of meaning for astronauts, characteristics of tasks that enhance vs. detract from meaning, and recommendations for enhancing meaning. Personnel mentioned contributing to humanity and the next generation, contributing to the mission, and exploration as the most meaningful aspects of their work. Characteristics of tasks that enhanced meaning included using a variety of skills, feeling personal control over their schedule, autonomy in the execution of tasks, and understanding the importance of the experiments conducted on the mission. Top recommendations to sustain meaning were ensuring social needs were met through such activities as the strategic use of social media, giving astronauts autonomy as well as structure, and conducting training during transit. Implications are addressed for tailoring meaning-based interventions for astronauts participating in long duration missions and assessing the effectiveness of these interventions. Britt TW, Sytine A, Brady A, Wilkes R, Pittman R, Jennings K, Goguen K. Enhancing the meaningfulness of work for astronauts on long duration space exploration missions. Aerosp Med Hum Perform. 2017; 88(8):779-783.
Abstract Multi-faceted systems of the future will entail complex logic and reasoning, with many levels of reasoning in intricate arrangement. The organization of these systems involves a web of connections and demonstrates self-driven adaptability. They are designed for autonomy and may exhibit emergent behavior that can be visualized. They will impact the manufacturing industry, defense, healthcare, energy, transportation, emergency response, agriculture, and society overall. Success will depend on how the current challenges related to cybersecurity, interoperability, privacy, safety, and socio-technical aspects (mainly the interaction of human behavior and complex adaptive systems) are handled. Complex Adaptive Systems have dynamically changing meta-architectures. Finding an optimal architecture for these systems is a multi-criteria decision-making problem, often involving many objectives, on the order of 20 or more. This creates a "Pareto breakdown" that prevents ordinary multi-objective optimization approaches from effectively searching for an optimal solution, saturating the decision maker with large sets of solutions that may not be representative for selecting a compromise architecture from the solution space. Our quest continues to handle the complexities of designing and operating these systems. The challenge in Complex Adaptive Systems design is to create an organized complexity that will allow a system to achieve its goals. Researchers from academia, industry, and government met in Chicago, Illinois, on October 30 to November 1, 2017, to share their findings and expand the boundaries of research in Complex Adaptive Systems. This year we concentrated on the current state of practice in engineering Cyber Physical Systems. This publication of the Complex Adaptive Systems Proceedings series contains the edited versions of the technical presentations of Complex Adaptive Systems held October 30 to November 1, 2017, in Chicago, Illinois, U.S.A.
The extended version of each selected paper was reviewed by two referees, then revised, edited, and condensed to the format herein. I would like to express my gratitude to the plenary speakers at the conference for their invaluable contributions through their talks. Further, I wish to express my gratitude to all authors for their contributions to this volume of proceedings and for their presentations at the conference, as well as to all referees for their comments and suggestions for revising the papers. I would also like to express our appreciation to the conference sponsors for bringing real-life dimensions, issues, and engineering problems to the meeting. Finally, I would like to thank Sue Turner and Latesha Zach for all their help and efforts, which enabled me to sail smoothly through the organization of this conference and the production of this volume.
Doc 893 : Cyber-physical-social system between a humanoid robot and a virtual human through a shared platform for adaptive agent ecology
Two artificial agents (a humanoid robot and a virtual human) are enriched with similar intelligence, autonomy, functionalities, and interaction modalities. The agents are integrated in the form of a cyber-physical-social system (CPSS) through a shared communication platform to create a social ecology. In the ecology, the agents collaborate (assist each other) to perform a real-world task (searching for a hidden object) for the benefit of humans. A robot-virtual human bilateral trust model is derived and a real-time trust measurement method is developed. The role of taking initiative in the collaboration is switched between the agents following a finite state machine model triggered by bilateral trust, which results in a mixed-initiative collaboration. A scheme is developed to evaluate the performance of the agents in the ecology through the CPSS. The results show that the robot and the virtual human perform satisfactorily in the collaboration through the CPSS. The results thus prove the effectiveness of a real-world ecology between artificial agents of heterogeneous realities through a shared platform based on trust-triggered mixed initiatives. The results can help develop adaptive social ecologies comprising intelligent agents of heterogeneous realities to assist humans in various tasks through collaboration between the agents in the form of a CPSS.
Doc 909 : Innovation Potentials and Pathways Merging AI, CPS, and IoT
Recent advances in the areas of Artificial Intelligence (AI) in the informatics field, Cyber-Physical Systems (CPS) in the production field, and Internet of Things (IoT) in the logistics and transportation field have induced a tremendous growth and innovation potential for global value chain setups. The question is not if further innovation and automation will happen but when—sooner than later—and how. Independent of physical production innovations (additive manufacturing) the information integration and decision autonomy tendencies themselves will drive new supply chain and customer interaction designs and business models. This article presents a technology forecast model based on extensive descriptions of developments by field as well as interaction traits. Results suggest that the crucial element in AI and technology application in logistics will be the human factor and human-artificial cooperation capacities and attitudes.
Doc 921 : Test Bed of Semantic Interaction of Smart Objects in the Web of Things
https://doi.org/10.3390/fi10050042 Santiago Guerrero-Narváez, Miguel-Ángel Niño-Zambrano, Dalila-Jhoana Riobamba-Calvache, Gustavo-Adolfo Ramírez-González
Semantic interaction in the Internet of Things (IoT) is an important concept within current IoT development, given that smart things require further autonomy with greater processing, storage, and communication capacities. The problem is now becoming one of how to get these things to interact and collaborate with each other, to form intelligent environments amongst themselves and thus generate better services for users. This article explores a solution approach that consists in providing collaborative behavior to smart things through the incorporation of an ontology and an architecture. It makes it possible for things to communicate and collaborate with each other, allowing the generation of new interaction services according to user needs. For this task, a real test bed of smart things was created in which the proposed solution was deployed (Smart Room). Finally, it was concluded that the creation of these types of test beds is feasible, taking into account that the response times and the information delivered by the different managed processes are acceptable. New challenges were encountered, however, such as critical-region problems in test beds with conflicting services and the management of multiple users.
Doc 922 : Towards the Internet of Agents: An Analysis of the Internet of Things from the Intelligence and Autonomy Perspective
Recently, the scientific community has demonstrated special interest in the integration of agent-oriented technology with Internet of Things (IoT) platforms. From this arises a novel approach named the Internet of Agents (IoA), an alternative for adding an intelligence and autonomy component to IoT devices and networks. This paper presents an analysis of the main benefits derived from the use of the IoA approach, based on a practical point of view regarding the necessities that humans face in their daily life and work, which can be addressed by IoT networks modeled as IoA infrastructures. Twenty-four case studies of the IoA approach in different domains (smart industry, smart city, and smart health and wellbeing) are presented in order to define the scope of these proposals in terms of intelligence and autonomy, in contrast to their corresponding generic IoT applications.
Doc 926 : The Human Takeover: A Call for a Venture into an Existential Opportunity
We propose a venture into an existential opportunity for establishing a world ‘good enough’ for humans to live in. Defining an existential opportunity as the converse of an existential risk—that is, a development that promises to dramatically improve the future of humanity—we argue that one such opportunity is available and should be explored now. The opportunity resides in the moment of transition of the Internet—from mediating information to mediating distributed direct governance in the sense of self-organization. The Internet of tomorrow will mediate the execution of contracts, transactions, public interventions and all other change-establishing events more reliably and more synergistically than any other technology or institution. It will become a distributed, synthetically intelligent agent in itself. This transition must not be just observed, or exploited instrumentally: it must be ventured into and seized on behalf of entire humanity. We envision a configuration of three kinds of cognitive system—the human mind, social systems and the emerging synthetic intelligence—serving to augment the autonomy of the first from the ‘programming’ imposed by the second. Our proposition is grounded in a detailed analysis of the manner in which the socio-econo-political system has evolved into a powerful control mechanism that subsumes human minds, steers their will and automates their thinking. We see the venture into the existential opportunity described here as aiming at the global dissolution of the core reason of that programming’s effectiveness—the critical dependence of the continuity of human lives on the coherence of the socially constructed personas they ‘wear.’ Thus, we oppose the popular prediction of the upcoming, ‘dreadful AI takeover’ with a call for action: instead of worrying that Artificial Intelligence will soon come to dominate and govern the human world, let us think of how it could help the human being to finally be able to do it.
Doc 928 : Distributed control with rationally bounded agents in cyber-physical production systems
Abstract Cyber-physical production systems are transforming traditional, hierarchical control structures into distributed ones in which the elements offer and consume services with a high degree of autonomy. This paper proposes an agent-based approach to distributed control for production environments in which the agents are only able to interact with a part of the whole system. It hypothesises that the performance of the agent network can be improved through learning and communication. A description of the approach is presented and illustrated with a simulation case of distributed control for an industrial compressed-air system.
Doc 935 : Survey of advances and challenges in intelligent autonomy for distributed cyber‐physical systems
With the evolution of the Internet of things and smart cities, a new trend of the Internet of simulation has emerged to utilise the technologies of cloud, edge, fog computing, and high-performance computing for design and analysis of complex cyber-physical systems using simulation. These technologies although being applied to the domains of big data and deep learning are not adequate to cope with the scale and complexity of emerging connected, smart, and autonomous systems. This study explores the existing state-of-the-art in automating, augmenting, and integrating systems across the domains of smart cities, autonomous vehicles, energy efficiency, smart manufacturing in Industry 4.0, and healthcare. This is expanded to look at existing computational infrastructure and how it can be used to support these applications. A detailed review is presented of advances in approaches providing and supporting intelligence as a service. Finally, some of the remaining challenges due to the explosion of data streams; issues of safety and security; and others related to big data, a model of reality, augmentation of systems, and computation are examined.
Doc 945 : The Entropy Based Approach to Modeling and Evaluating Autonomy and Intelligence of Robotic Systems
This review paper presents the Entropy approach to modeling and performance evaluation of Intelligent Machines (IMs), which are modeled as hierarchical, multi-level structures. It provides a chronological summary of developments related to intelligent control, from its origins to current advances. It discusses fundamentals of the concept of Entropy as a measure of uncertainty and as a control function, which may be used to control, evaluate, and improve, through adaptation and learning, the performance of engineering systems. It describes a multi-level, hierarchical architecture that is used to model such systems, and it defines autonomy and machine intelligence for engineering systems, with the aim of setting the foundations necessary to tackle related challenges. The modeling philosophy for the systems under consideration follows the mathematically proven principle of Increasing Precision with Decreasing Intelligence (IPDI). Entropy is also used in the context of N-Dimensional Information Theory to model the flow of information throughout such systems, and it contributes to quantitatively evaluating uncertainty, and thus autonomy and intelligence. It is explained how Entropy qualifies as a unique, single measure to evaluate autonomy, intelligence, and precision of task execution. The main contribution of this review paper is that it brings under one forum research findings from the 1970s and 1980s, and that it supports the argument that even today, given the unprecedented existing computational power and advances in Artificial Intelligence, Deep Learning, and Control Theory, the same foundational framework may be followed to study large-scale, distributed Cyber Physical Systems (CPSs), including distributed intelligence and multi-agent systems, with direct applications to the SmartGrid, transportation systems, and multi-robot teams, to mention but a few applications.
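The uncertainty measure underlying this approach is Shannon entropy over a discrete probability distribution, which can be computed in a few lines (here in bits; the paper's multi-level use of entropy as a control function builds on this basic quantity).

```python
# Shannon entropy H(p) = -sum(p_i * log2(p_i)) of a discrete
# probability distribution, in bits; zero-probability terms contribute 0.
import math

def shannon_entropy(probs):
    if abs(sum(probs) - 1.0) > 1e-9:
        raise ValueError("probabilities must sum to 1")
    return -sum(p * math.log2(p) for p in probs if p > 0)
```

A deterministic outcome has zero entropy (no uncertainty), while a uniform distribution maximizes it, which is the sense in which lower entropy corresponds to higher precision of task execution.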
Doc 946 : MULTIDISCIPLINARY ENGINEERING TO SOLVE THE PROBLEM OF CONGESTION IN VEHICULAR SYSTEMS
The aim of developing an engineering project to solve circulation problems in large cities with the guidance of advanced technologies has led to the observation of several additional factors that affect the overall performance of the system. The most relevant are polluting emissions, the physical and emotional health of the population, and excessive journey times. The vehicle entity, even with attributes of autonomy and connectivity, interacts with the road infrastructure, human driver, and cybernetic link entities, which are determinants of the results.
Microsoft, together with Cambridge, Massachusetts tech startup Composable Analytics, and Apttus, headquartered in San Mateo, delivered an innovative executive-level thought leadership event this fall called R.I.S.E. (Robotics, Intelligence, Society, Economy). The challenge was clear: rapid advancements in artificial intelligence (AI), machine learning (ML), robotics, social media, autonomy, image and speech recognition, and other areas of technology will continue to have massive economic and social impact, raising entirely new sets of questions and presenting challenges and new opportunities in every part of society. Key business stakeholders essential to identifying practical, realistic future solutions for these technologies are often not at the table to discuss digital transformation plans, or are not aware of the near-term nature of the changing technological landscape: what the World Economic Forum calls a Fourth Industrial Revolution.
Doc 968 : Team Situation Awareness in Human-Autonomy Teaming: A Systems Level Approach
Project overview. The current study focuses on analyzing team flexibility by measuring entropy (where higher values correspond to system reorganization and lower values correspond to more stable system organization) across all-human teams and Human-Autonomy Teams (HAT). We analyzed teams in the context of a fully-fledged synthetic agent that acts as a pilot for a three-agent Remotely Piloted Aircraft System (RPAS) ground crew. The synthetic agent must be able to communicate and coordinate with human teammates in a constructive and timely manner to be effective. This study involved three heterogeneous team members who had to take photographs of target waypoints and communicate via a text-based communication system. The three team members’ roles were: 1) navigator provides information about flight plan with speed and altitude restrictions at each waypoint; 2) pilot adjusts altitude and airspeed to control the Remotely Piloted Aircraft (RPA), and negotiates with the photographer about the current altitude and airspeed to take good photos for the targets; and 3) photographer screens camera settings, and sends feedback to other team members regarding the target photograph status. The three conditions differed based on the manipulation of the pilot role: 1) Synthetic – the pilot was the synthetic agent, 2) Control – the pilot was a randomly assigned participant, and 3) Experimenter – the pilot was a well-trained experimenter who focused on sending and receiving information in a timely manner. The goal of this study is to examine how overall RPAS flexibility across HATs and all-human teams are associated with Team Situation Awareness (TSA). Method. There were 30 teams (10-teams per condition): control teams consisted of three participants randomly assigned to each role; synthetic and experimenter teams included two participants randomly assigned to the navigator and photographer roles. 
The experiment took place over five 40-minute missions, and the goal was to take as many "good" photos of ground targets as possible while avoiding alarms and rule violations. We obtained several measures, including mission- and target-level team performance scores, team process measures (situation awareness, process ratings, communication and coordination), and other measures (teamwork knowledge, workload, and demographics). We first estimated the amount of system reorganization of the RPAS via an information entropy measure, i.e., the number of arrangements the system occupied over a given period of time (Shannon & Weaver, 1975). Based on information entropy, we defined four layers to represent the RPAS (Gorman, Demir, Cooke, & Grimm, In Review): 1) communications - the chat-based communication among team members; 2) vehicle - the RPA itself, e.g., speed, altitude; 3) control - the interface between the RPA and the user; and 4) system - the overall activity of the sub-layers. Then, we looked at the relationship between layers and TSA, which was based on successfully overcoming and completing ad hoc embedded target waypoints. Results and conclusion. Overall, the experimenter teams adapted to more roadblocks than the synthetic teams, who were equivalent to control teams (Demir, McNeese, & Cooke, 2016). The findings indicate that: 1) synthetic teams demonstrated rigid systems-level activity, consisting of less reorganization of the communication, control, and vehicle layers as conditions changed, which resulted in less adaptation to roadblocks; 2) control teams demonstrated less communication reorganization but more control and vehicle reorganization, which also resulted in less adaptation to roadblocks; and 3) experimenter teams demonstrated more reorganization across the communication, control, and vehicle layers, which resulted in better adaptation to roadblocks.
Thus, the ability of a system to reorganize across human and technical layers as situations change is needed to adapt to novel conditions of team performance in a dynamic task.
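The entropy-based flexibility measure described above (more arrangements occupied, higher entropy, more reorganization) can be sketched as Shannon entropy of the empirical distribution of observed system states. The function name and the discrete-state encoding are illustrative assumptions, not the study's exact computation.

```python
# Assumed sketch of the reorganization measure: count how often the
# system occupies each arrangement over an observation window, then
# compute Shannon entropy of the empirical distribution (higher values
# indicate reorganization; lower values indicate stable organization).
import math
from collections import Counter

def reorganization_entropy(state_sequence):
    counts = Counter(state_sequence)
    n = len(state_sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A rigid team that stays in one arrangement scores 0 bits, while a team cycling evenly through four arrangements scores 2 bits, mirroring the contrast between the synthetic and experimenter teams reported above.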
Doc 972 : Construction of an embedded system for wireless control of alternating voltage in physics experiments
This paper describes the construction, programming, and implementation of an embedded system, based on free educational software, for the embedded control of sinusoidal alternating voltage (AC) in electricity practices in the physics laboratory. Software can be considered educational when properly contextualized in a teaching and learning relationship. Thus, this work presents educational software developed and applied to digital control, wirelessly via Bluetooth, to change the AC voltage of the grid using Android smartphones or tablets, making the experiment more interactive and playful. The application of educational software in experiments also allows students to verify, in real time, the influence of program changes on physical phenomena, and stimulates the development of logical reasoning and consequently the students' autonomy, as they can raise hypotheses, intervene in the programming, and draw conclusions from the practical results obtained.
Doc 980 : Ambiance Intelligence Approach Using IoT and Multi-Agent System
The Internet of Things is a network of objects mainly supported by electronic devices and components such as sensors and electronic cards. These objects, which can be physical or virtual devices, sensors, or actuators, are autonomous and have their own intelligence characteristics. Smart environments, on the other hand, are those into which sensors and actuators have been integrated, so as to react to events and adapt to those present. The environment acquires intelligence through its intelligent components, or through the intelligence resulting from its interaction with other components. Our contribution is a proposal for a Cognitive IoT (CIoT) device structure that adds an agent layer to the device. This layer provides the device with agent characteristics (intelligence, autonomy, cooperation, and organization).
Doc 984 : ‘Fateful’ vs. ‘everyday’ choices: qualitative differences in choice situations and the dimensions of choicework
This study focuses on choicework in situations of different subjective importance. Psychology students (N=74) and internet sample respondents (N=1,833) were asked to recollect several choice situations of varying importance from their experience and to name, describe, and evaluate them using a number of self-report measures. Combining qualitative and quantitative data analysis, we devised a series of qualitative indicators of choicework (context and content of choice, emotional attitude to the choice process, satisfaction with choice, mindfulness, autonomy, difficulty, and significance) and compared the choice situations on these parameters. Significant and trivial choices differed on a number of variables (more significant situations were characterized by more complicated and conscious choicework). Choice situations with different thematic content also differed in their subjective importance and other parameters of choicework. The results imply the necessity to consider the scale of significance and the thematic content of situations used in choice studies.
Doc 985 : Decentralized Collective Learning for Self-managed Sharing Economies
The Internet of Things equips citizens with a phenomenal new means for online participation in sharing economies. When agents self-determine the options from which they choose, for instance their resource consumption and production, while these choices have a collective system-wide impact, optimal decision-making turns into a combinatorial optimization problem that is NP-hard. In such challenging computational problems, centrally managed (deep) learning systems often require personal data, with implications for privacy and citizens’ autonomy. This article envisions an alternative unsupervised and decentralized collective learning approach that preserves privacy, autonomy, and the participation of multi-agent systems self-organized into a hierarchical tree structure. Remote interactions orchestrate a highly efficient process for decentralized collective learning. This disruptive concept is realized by I-EPOS, the Iterative Economic Planning and Optimized Selections, accompanied by a paradigmatic software artifact. Strikingly, I-EPOS outperforms related algorithms that involve non-local brute-force operations or exchange full information. This article contributes new experimental findings about the influence of network topology and planning on learning efficiency, as well as findings on techno-socio-economic tradeoffs and global optimality. Experimental evaluation with real-world data from energy and bike sharing pilots demonstrates the grand potential of collective learning for designing ethically and socially responsible participatory sharing economies.
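The bottom-up plan selection at the heart of such collective learning can be caricatured in a few lines. This is a deliberately simplified, sequential sketch, not the real I-EPOS algorithm: I-EPOS iterates over a tree topology exchanging only aggregates, whereas here agents simply take turns, and all plans below are invented.

```python
def variance(vec):
    m = sum(vec) / len(vec)
    return sum((x - m) ** 2 for x in vec) / len(vec)

def greedy_select(agent_plans):
    """Each agent, seeing only the running aggregate, picks the local
    plan that minimizes the variance of the new aggregate, flattening
    the collective load profile without sharing raw personal data."""
    total = [0.0] * len(agent_plans[0][0])
    choices = []
    for plans in agent_plans:
        best = min(plans, key=lambda p: variance([t + x for t, x in zip(total, p)]))
        total = [t + x for t, x in zip(total, best)]
        choices.append(best)
    return choices, total

# Invented example: two agents, each with two possible consumption
# plans over three time slots (peaked vs. flat).
agents = [
    [[3, 0, 0], [1, 1, 1]],
    [[0, 0, 3], [1, 1, 1]],
]
choices, total = greedy_select(agents)
print(total)  # -> [2.0, 2.0, 2.0], a flat collective profile
```

The privacy-preserving character comes from each agent disclosing only a chosen plan against an aggregate signal, never the full set of personal options.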
Doc 987 : Sharpening the Scythe of Technological Change: Socio-Technical Challenges of Autonomous and Adaptive Cyber-Physical Systems
Autonomous and Adaptive Cyber-Physical Systems (ACPS) represent a new knowledge frontier of converging “nano-bio-info-cogno” technologies and applications. ACPS have the ability to integrate new ‘mutagenic’ technologies, i.e., technologies able to cause mutations in society. Emerging approaches, such as artificial intelligence techniques and deep learning, enable exponential speedups for supporting increasingly higher levels of autonomy and self-adaptation. In spite of this disruptive landscape, however, the deployment and broader adoption of ACPS in safety-critical scenarios remain challenging. In this paper, we address some challenges that are stretching the limits of ACPS safety engineering, including tightly related aspects such as ethics and resilience. We argue that a paradigm change is needed, one that encompasses the entire socio-technical dimension, including trustworthiness, responsibility, and liability, as well as the ACPS ability to learn from past events, anticipate long-term threats, and recover from unexpected behaviors.
Doc 998 : Exploration and Exploitation of New Knowledge Emergence to Improve the Collective Intelligent Decision-Making Level of Web-of-Cells With Cyber-Physical-Social Systems Based on Complex Network Modeling
Through the exploration and exploitation of new knowledge emergence, the collective intelligent decision-making (CID) level of the Web-of-Cells (WoC) proposed by ELECTRA will be dramatically improved. For this purpose, we thoroughly investigate complex network theory and modeling methods for WoC with cyber-physical-social systems (CPSS). WoC is a new intelligent dispatching framework characterized by weak centralization, self-organization coupling, high independence, efficient coordination, and autonomous learning. Based on these characteristics and actual engineering demands, in this paper we adopt complex network theory, parallel machine learning, and multi-agent stochastic game theory to address three basic scientific issues in WoC dispatching and control: how to build a complex network model for WoC with CPSS to stimulate new knowledge emergence; how to analyze the stability of the evolving structure and of operation during this knowledge emergence process; and how to use the newly emerged knowledge to achieve cell autonomy and system-wide coordination based on independent and collective intelligent decision-making, respectively. Finally, we conduct some explorations and offer an outlook for WoC. The biggest innovation of this paper lies in thoroughly investigating how to fully stimulate and utilize new knowledge emergence from WoC to greatly improve its CID level of dispatching and control. This will be of great significance to the development of new-generation smart dispatching for power systems in the future.
Abstract The internet has had a tremendous impact on our routine life. Recent developments in Internet of Things (IoT) technology have brought several issues to the fore. IoT is a new emerging technology in which electronic devices communicate through the medium of the Internet for remote sensing and control. IoT is not a single technology; rather, it is the convergence of heterogeneous technologies from different engineering domains. Prominent technologies include Radio Frequency IDentification (RFID), networking and communication, Wireless Sensor Networks (WSN), Real-Time Systems (RTS), cloud computing, etc. In establishing communication between devices, IoT faces many challenges on account of the diversity of devices, energy and power constraints, noise and interference, stringent timeliness requirements, etc. Though several researchers have attempted to address these challenges in IoT, more research is still required for the complete transition from Internet Protocol version 4 (IPv4), developed in 1981, to IPv6, developed in 2008. Researchers need to focus on increasing the address space to cater to unlimited things on our planet, evolving a set of standards for uniformity, and developing new energy sources for powering every single sensor. To date, research has focused on evolving design strategies for energy autonomy in electronic devices deployed in IoT, so as to enable self-powering of a device in the absence of a DC supply. The present work reviews research on implementable methods of energy autonomy reported in the recent literature (2010 to 2018), so as to identify immediate issues to be addressed and new directions to be explored for enhancing the energy autonomy of devices deployed in IoT. The conclusions of the work will be useful for researchers engaged in addressing issues of energy optimization in various domains of IoT. The conclusions will also be of interest to business personnel, sponsoring government agencies, performance standardization agencies, and academicians engaged in the development of IoT.
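As a worked example of the energy-autonomy arithmetic such design strategies target (all figures invented, not from the article): a duty-cycled node's battery life is governed by its time-weighted average current.

```python
def avg_current_ma(i_active_ma, t_active_s, i_sleep_ma, period_s):
    """Average current draw of a duty-cycled IoT node that wakes for
    t_active_s seconds out of every period_s seconds."""
    t_sleep = period_s - t_active_s
    return (i_active_ma * t_active_s + i_sleep_ma * t_sleep) / period_s

def battery_life_days(capacity_mah, avg_ma):
    """Idealized battery life, ignoring self-discharge and aging."""
    return capacity_mah / avg_ma / 24

# Illustrative figures: a sensor node that wakes for 2 s every
# 5 minutes, drawing 20 mA when active and 0.01 mA asleep.
avg = avg_current_ma(20, 2, 0.01, 300)
print(round(avg, 3))                           # average draw in mA
print(round(battery_life_days(2400, avg), 1))  # days on a 2400 mAh cell
```

Shrinking either the active window or the sleep current dominates the budget, which is why the reviewed literature concentrates on harvesting and deep-sleep design.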
Doc 1009 : Unsupervised by any other name: Hidden layers of knowledge production in artificial intelligence on social media
Artificial Intelligence (AI), in the form of different machine learning models, is applied to Big Data as a way to turn data into valuable knowledge. The rhetoric is that the ensuing predictions work well, with a high degree of autonomy and automation. We argue that we need to analyze the process of applying machine learning in depth and highlight at what points human knowledge production takes place in seemingly autonomous work. This article reintroduces classification theory as an important framework for understanding such seemingly invisible knowledge production in machine learning development and design processes. We suggest a framework for studying such classification closely tied to the different steps in the work process, and we exemplify the framework on two experiments with machine learning applied to Facebook data from one of our labs. By doing so, we demonstrate ways in which classification and potential discrimination take place in even seemingly unsupervised and autonomous models. Moving away from concepts of non-supervision and autonomy enables us to understand the underlying classificatory dispositifs in the work process, and this form of analysis constitutes a first step towards governance of artificial intelligence.
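The article's point, that human classification decisions hide inside "unsupervised" models, can be made concrete with a toy clustering example. The code and data are ours, not the authors'; each starred comment marks a designer choice that shapes the "autonomous" result.

```python
def kmeans_1d(values, k, iters=20):
    """Tiny 1-D k-means. Every starred comment flags a human design
    choice hidden inside an 'unsupervised' model."""
    step = max(1, len(values) // k)
    centers = sorted(values)[::step][:k]                       # * init scheme
    for _ in range(iters):                                     # * stopping rule
        groups = [[] for _ in centers]
        for v in values:
            # * distance metric: absolute difference, chosen by the designer
            i = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[i].append(v)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sorted(centers)

# * the feature (posts per day) and even the number of clusters k
#   are human classification decisions, not properties of the data
posts_per_day = [1, 2, 2, 3, 40, 45, 50]
print(kmeans_1d(posts_per_day, k=2))  # -> [2.0, 45.0]
```

The model "discovers" a casual/heavy-poster divide, but the divide is largely pre-figured by the designer's choices of feature, k, metric, and initialization.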
Doc 1012 : Complex reactive event processing for assisted living: The Habitat project case study
https://doi.org/10.1016/j.eswa.2019.02.025 Daniela Loreti, Federico Chesani, Paola Mello, Luca Roffia, Francesco Antoniazzi, Tullio Salmon Cinotti, Giacomo Paolini, Diego Masotti, Alessandra Costanzo
Abstract While the increasing average age of the population is posing new challenges to societies and healthcare systems, the emergence of the Internet of Things research area is generating hope for automated assisted environments, which could combine advances in sensor networks with those of runtime monitoring systems in order to create smart houses able to take care of their older inhabitants and delay recourse to hospitals and nursing homes. However, although various assisted living systems have been proposed in the last decade, the goal of realizing an effective domestic support system for the elderly is still far from being reached. In this work, we present a project aiming to re-engineer a set of everyday-life objects, equipping them with environmental and wearable sensors, so as to monitor the condition of older people in their domestic residences and provide security while preserving the autonomy and independence of the subjects. The main focus of the paper at hand is on the requirements and solutions implemented to realize the backbone infrastructure of such a system, as regards both the adopted semantic message routing mechanism and the newly conceived approach to event analysis, which combines Complex Event Processing and a reactive implementation of the Event Calculus.
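A minimal flavor of the event analysis described above, deriving a composite "prolonged inactivity" event from simple sensor events, might look as follows. The event format, threshold, and data are invented; the actual system uses Complex Event Processing plus a reactive Event Calculus rather than this plain loop.

```python
def detect_inactivity(events, threshold_s):
    """Minimal complex-event sketch: raise a composite 'inactivity'
    alert whenever the gap between consecutive movement readings
    exceeds a threshold. Timestamps are seconds; data is invented."""
    alerts = []
    for prev, curr in zip(events, events[1:]):
        gap = curr - prev
        if gap > threshold_s:
            alerts.append((prev, gap))  # (last activity, gap length)
    return alerts

# Movement-sensor timestamps (s) with one suspicious two-hour gap.
readings = [0, 60, 130, 190, 7390, 7450]
print(detect_inactivity(readings, threshold_s=1800))  # -> [(190, 7200)]
```

A rule engine generalizes this idea: instead of one hard-coded pattern, declarative rules over event streams derive higher-level situations (falls, missed meals) from many low-level sensor events.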
Doc 1047 : Cars and Contemporary Communication| Maps and the Autonomous Vehicle as a Communication Platform
Over the past two decades, there has been growing awareness of and critical interest in the convergence of information and communication technologies and automobiles. Writing in 2004, Mike Featherstone suggests that the “automobile becomes a new form of communications platform with a complex set of possibilities.” In this article, we argue that the notion of the car as a communication platform continues to form a productive way of thinking about autonomous vehicles. The argument we develop is that the dual roles of data acquisition and management, and local processing are integral to any understanding of the contemporary autonomous vehicle’s “machinic complex.” Both of these things are strongly associated with autonomy and the transformation of cars into decision-making machines. We use the example of mapping to argue that these capacities are not unique to the emerging technologies of autonomous vehicles; however, they are essential to them, with significant implications not only for their capabilities as communications platforms, but also more generally for their governance and political economy.
Doc 1060 : Maps and the Autonomous Vehicle as a Communication Platform
R. Wilken, Julian Thomas
Doc 1068 : The Evolution of Human-Autonomy Teams in Remotely Piloted Aircraft Systems Operations
The focus of this current research is two-fold: (1) to understand how team interaction in human-autonomy teams (HATs) evolves in the Remotely Piloted Aircraft Systems (RPAS) task context, and (2) to understand how HATs respond to three types of failures (automation, autonomy, and cyber-attack) over time. We summarize the findings from three of our recent experiments regarding team interaction within HATs over time in the dynamic context of RPAS. For the first and second experiments, we summarize general findings related to the team member interaction of a three-member team over time, comparing HATs with all-human teams. In the third experiment, which extends beyond the first two, we investigate HAT evolution when HATs are faced with three types of failures during the task. For all three experiments, measures focus on team interactions and temporal dynamics consistent with the theory of interactive team cognition. We applied Joint Recurrence Quantification Analysis to communication flow in the three experiments. One of the most interesting and significant findings from our experiments regarding team evolution is the idea of entrainment: one team member (the pilot in our study, either agent or human) can change the communication behaviors of the other teammates over time, including coordination, and thereby affect team performance. In the first and second studies, the behavioral passiveness of the synthetic teams resulted in very stable and rigid coordination in comparison to the all-human teams, which were less stable. Experimenter teams demonstrated metastable coordination (neither rigid nor unstable) and performed better than rigid and unstable teams during the dynamic task. In the third experiment, metastable behavior helped teams overcome all three types of failures.
These summarized findings address three potential future needs for ensuring effective HATs: (1) training of autonomous agents on the principles of teamwork, specifically understanding the tasks and roles of teammates; (2) human-centered machine learning design of the synthetic agent, so that agents can better understand human behavior and ultimately human needs; and (3) training of human members to communicate and coordinate with agents, given the current limitations of the agents' Natural Language Processing.
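A simplified, categorical analogue of the Joint Recurrence Quantification Analysis applied to communication flow can be sketched as follows. The speaker sequences are invented, and real JRQA involves embedding and threshold choices not shown here.

```python
def recurrence_matrix(seq):
    """Binary recurrence matrix for a categorical series: entry (i, j)
    is 1 when the same state recurs at times i and j."""
    n = len(seq)
    return [[1 if seq[i] == seq[j] else 0 for j in range(n)] for i in range(n)]

def joint_recurrence_rate(seq_a, seq_b):
    """Fraction of off-diagonal time pairs at which BOTH series recur
    simultaneously: a toy categorical analogue of JRQA."""
    ra, rb = recurrence_matrix(seq_a), recurrence_matrix(seq_b)
    n = len(seq_a)
    joint = sum(ra[i][j] * rb[i][j] for i in range(n) for j in range(n) if i != j)
    return joint / (n * (n - 1))

# Invented communication-flow series: who is speaking at each step
# in two different teams.
team_1 = ["pilot", "nav", "pilot", "nav"]
team_2 = ["pilot", "nav", "pilot", "photo"]
print(joint_recurrence_rate(team_1, team_2))
```

High joint recurrence between two teams' flows indicates shared, repeating interaction structure; very high values correspond to the rigid, entrained coordination the synthetic teams displayed.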
Doc 1075 : Integrations between Autonomous Systems and Modern Computing Techniques: A Mini Review
The emulation of human behavior for autonomous problem solving has been an interdisciplinary field of research. Generally, classical control systems are used for static environments, where external disturbances and changes in internal parameters can be fully modeled beforehand or neglected during operation. However, classical control systems are inadequate at addressing environmental uncertainty. By contrast, autonomous systems, which were first studied in the field of control systems, can be applied in an unknown environment. This paper summarizes state-of-the-art autonomous systems by first discussing the definition, modeling, and system structure of autonomous systems and then providing a perspective on how autonomous systems can be integrated with advanced resources (e.g., the Internet of Things, big data, over-the-air updates, and federated learning). Finally, what comes after reaching full autonomy is briefly discussed.
Doc 1094 : The Master’s Tools Will Never Dismantle The Master’s House: Kavanaugh’s Confirmation Hearing and The Perils of Progressive Punitivism
On October 3, 2018, more than 2,400 law professors signed a letter addressed to the U.S. Senate, titled The Senate Should Not Confirm Kavanaugh. In the letter, the signatories argued that Brett Kavanaugh’s demeanor and tone during his confirmation hearing demonstrated a lack of judicial temperament and evinced his unfitness for high judicial office.
Like many of my colleagues, I was deeply discomfited by the hearings, which reduced fact-finding to a partisan context and echoed the cultural acceptability of disrespect of women’s bodily autonomy — an obviously infuriating state of affairs. And yet, the reaction to the hearings by Kavanaugh’s opponents left me wondering: What were we hoping to accomplish by mocking and reviling this man? By expecting him to apologize or take responsibility and then trumpeting the inadequacy of his response? Was this hearing, and our polarized reactions to it, making inroads for gender equality? Would this become a conversation starter, one that, as my progressive and feminist colleagues argued, would “spark a reckoning” with the system?
This essay argues that the strong sentiments against Kavanaugh, though understandable and keenly felt, might have been deployed in the wrong direction, and an example of a broader phenomenon that I refer to as progressive punitivism. Progressive punitivism is the reliance on weapons traditionally wielded by the conservative right — shaming, stigmatization, denial of rehabilitation, punitive approaches, and identity-driven divisions — in the service of social justice ideals. Progressive punitivism operates within the criminal justice system, in the context of holding violent police officers, hate criminals, sexual abusers, and lenient judges accountable for their actions, but it also operates throughout the realm of social media and public opinion, and these two realms often cross paths in complex ways.
The marshaling of much progressive energy in the direction of punishing powerful individuals for their misdeeds should be viewed with ambivalence. On one hand, the desire for accountability on the part of the powerful and socially advantaged is understandable; on the other, I am deeply skeptical as to the potential of progressive punitivism to effectuate change and bring about mutual understanding, let alone a reckoning. My concern is that the recurrence to punitive methods sows divisiveness and rancor, discredits efforts at rapprochement or apology at the outset and thus discourages them, and directs the movement’s energy in poisonous, and ultimately futile, directions. In short, I think that the effort to dismantle the master’s house of misogyny and racial domination with the master’s tools — a recurrence to punitivism, excoriation, and shaming — is doomed to fail.
This essay proceeds in four parts. In Part I I problematize the idea of the accused’s demeanor as evidence of guilt, remorse, or entitlement, arguing that we tend to overestimate our ability to deduce internal states of mind from people’s behavior and expressions. Part II assesses the potential (or lack thereof) of public performances of reckoning to produce a valuable expression of remorse, discussing the value of contingent apologies. Part III expands the framework to examine the way our politically fractured field responds to partisan efforts to excoriate culprits, arguing that “starting a national conversation” on the basis of excoriation and stigmatization is not a realistic expectation. In Part IV I situate the Kavanaugh incident in the overall context of progressive punitivism, offering an initial and generative sketch of the ideology and its mixed effects. The conclusion offers a modest proposal for a better way to start a bipartisan conversation about gender-based inequities and iniquities, as well as a future agenda for research on progressive punitivism in its other manifestations.
Doc 1100 : Layered Dynamics and System Effectiveness of Human-Autonomy Teams Under Degraded Conditions
Project overview Teamwork can be defined as dynamic team interaction between two or more interdependent members to achieve a shared goal. Many studies have examined how coordination dynamics are associated with team effectiveness in the context of all-human teams (Gorman, Amazeen, & Cooke, 2010), and later, in human-autonomy teams (HATs) (Demir, Likens, Cooke, Amazeen, & McNeese, 2018). HATs must have autonomous agents that act as effective teammates and help enable HATs to function as collaborative systems. Synergistic relationships among a system’s human and technological components provide the basis for emergent systems-level outcomes. Layered dynamics, a recent empirical modelling technique aimed at achieving this objective (Gorman et al., 2019), considers reorganization of the sociotechnical system across individual components and the overall system. The current study examined the layered dynamics of HATs during automation and autonomy failures and addresses how members of HATs interact with each other and with technological aspects of the system. Design and Method We utilized a simulated Remotely Piloted Aircraft System (RPAS) Synthetic Task Environment with three heterogeneous and interdependent roles: (1) a navigator, who created a dynamic flight plan and provided waypoint-related information; (2) a pilot, who used this information to monitor and adjust settings. The pilot also communicated with the photographer to negotiate settings and enable proper conditions to obtain a good photograph; and (3) a photographer, who monitored and adjusted the camera to take good target photos, and provided feedback to the team. This study utilized a Wizard of Oz paradigm, in which the navigator and photographer were instructed that the pilot was a synthetic agent. However, the pilot was a highly-trained experimenter, in a separate room, who simulated an autonomous agent using limited vocabulary.
There were 22 teams, and two participants were randomly assigned to the navigator and photographer roles. The task consisted of ten 40-minute missions, and teams needed to take as many good photos as possible while avoiding alarms and rule violations. The primary manipulation was the application of three degraded conditions: (1) automation failure - role-level display failures; (2) autonomy failure - the autonomous agent’s abnormal behavior; and (3) malicious cyber-attacks - the hijacking of the RPAS, with the synthetic agent providing false, detrimental information. We symbolically represented the RPAS using layered dynamics and calculated entropy measures for each layer (Gorman et al., 2019): (1) communications: team members interacting within the chat system; (2) vehicle: states of the RPA, including airspeed/altitude, turns, fuel, battery, remaining film, and temperature level; and (3) controls: the interface controls between the RPA and the team members. To measure team performance, we used a time- and coordination-based metric for each target in each mission. Results and Discussion Our main findings were: 1) vehicle and communication entropy were higher than control entropy and were associated with better adaptation to both failures, and 2) control entropy had a negative association with initial status on team performance, while vehicle entropy had a positive association. These findings describe the tendency of low-performing teams to anticipate targets poorly. This was due to a failure to interact with the technology in a timely manner. This lagged effect can be attributed to teams taking too long to interact with the technology. These findings shed light on how the layered dynamics approach can help understand team behavior under degraded conditions. Acknowledgements This research is supported by ONR Award N000141712382 (Program Managers: Marc Steinberg, Micah Clark). We also acknowledge the assistance of Steven M. Shope, Sandia Research Corporation, who integrated the synthetic agent and the testbed.
Doc 1102 : Does Team Interaction Exploration Support Resilience in Human Autonomy Teaming?
Project overview As a team explores interactions, they may find opportunities to expand and refine teamwork over time. This can have consequences for team effectiveness in normal and unexpected situations (Woods, 2018). Understanding the role of exploratory team interactions may be relevant for human-autonomy team (HAT) resilience in the face of synthetic agent rigidity and lack of anticipation (Demir et al., 2019). Team interaction exploration was defined as team interactions with qualities (e.g., content, communication medium) unique to a team’s interaction history (Cooke et al., 2013; Hills et al., 2015). This study examines the relationship between team interaction exploration and HAT performance in multiple remotely piloted aerial system (RPAS) reconnaissance missions with degraded conditions. The goal of the task was to take good photos of target waypoints. In this task, three teammates are assigned to specific roles: the navigator plans the route using a digital map, the pilot (synthetic) controls the RPAS and selects target waypoints, and the photographer calibrates camera settings to take a good photo of a target waypoint. The synthetic agent was capable of routine team coordination without explicit team-player qualities. Teams communicated via a text-chat interface. Seven unique degraded conditions were injected throughout the ten missions. Three automation failures disrupted RPAS status information on the photographer’s or pilot’s display, and three autonomy failures disrupted the synthetic agent’s comprehension of waypoint information or caused the agent to move on to the next target before a photo was taken. Finally, a malicious cyber-attack caused the synthetic agent to fly the RPAS to an enemy-occupied waypoint. Method Forty-four participants (undergraduate and graduate students) were recruited in pairs from a large southwestern university, forming 22 teams.
The experiment consisted of ten 40-minute missions in total, carried out over two sessions separated by one- to two-week intervals. After a baseline mission, an automation and an autonomy failure were injected into each mission while the team processed target waypoints. The malicious cyber-attack occurred during the final 20 minutes of the tenth mission. This study collected several measures, including measures of team process, physiological measures, and surveys of teamwork knowledge, trust, workload, and anthropomorphism, which are not considered in this study. Exploratory team interaction was operationalized as any text message unique in content, sender, or recipient that was unrelated to routine coordination of target waypoints. Teams were grouped using k-means clustering by their target-processing efficiency, number of overcome roadblocks, and mission performance. The three clusters (K = 3) were comparatively described as low- (N = 7), middle- (N = 7), and high-performing (N = 5) teams. A mixed-factor ANOVA compared the frequency of each team’s exploratory interactions by mission and cluster. Results and discussion High-performing teams were distinguished from middle- and low-performing teams in their ability to maintain high levels of overall performance while efficiently processing targets and overcoming many roadblocks. Middle-performing teams were efficient in overcoming roadblocks but had worse mission performance. The findings indicate that 1) high-performing teams explored team interactions more than middle-performing teams, 2) there was no significant difference in exploration frequency between high- and low-performing teams, and 3) teams explored more in the first session than the second session, with the exception of the final mission.
Overall, exploratory team interaction differentiated HAT performance in normal and degraded conditions and should be further examined at other levels of interaction, such as content meaning and interaction patterns.
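The operationalization of exploratory interaction, messages novel in content, sender, or recipient and excluding routine target coordination, admits a simple sketch; the message format and routine-topic set below are assumptions for illustration, not the study's coding scheme.

```python
def count_exploratory(messages, routine_topics):
    """Count messages that are novel in (sender, recipient, content)
    and not part of routine target coordination: a rough sketch of the
    'team interaction exploration' measure."""
    seen = set()
    exploratory = 0
    for sender, recipient, content in messages:
        key = (sender, recipient, content)
        if key not in seen and content not in routine_topics:
            exploratory += 1
        seen.add(key)
    return exploratory

chat = [
    ("nav", "pilot", "waypoint f-area speed 200"),  # routine coordination
    ("photo", "nav", "my screen froze"),            # novel -> exploratory
    ("nav", "photo", "try re-opening the map"),     # novel -> exploratory
    ("photo", "nav", "my screen froze"),            # repeat, not novel
]
print(count_exploratory(chat, routine_topics={"waypoint f-area speed 200"}))  # -> 2
```

Tracking the per-mission trajectory of this count is what allows exploration to be compared across performance clusters and sessions.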
Doc 1103 : Team Coordination of Team Situation Awareness in Human-Autonomy Teaming
Project Overview Team Situation Awareness (TSA), which is a part of team cognition, is a critical factor that influences team effectiveness. It can be defined as getting the right information from the right person within the right amount of time, in order to overcome an unexpected event (Gorman, Cooke, Pederson, Connor, & DeJoode, 2005). TSA is developed and maintained through team interactions, allowing for the measurement of TSA based on team interaction (Cooke & Gorman, 2009). In the current study, a specific measure, Coordinated Awareness of Situation by Teams (CAST), is used (Cooke & Gorman, 2009). CAST evaluates the effectiveness and efficiency of team interaction under “roadblock” scenarios (Gorman, Cooke, & Winner, 2006). These roadblocks represent novel situations in the task and require effective team communication and coordination. Team members must assess the situation according to their own specialized role and/or resources and coordinate with other team members to overcome each separate roadblock. In this task, effective communication refers to team anticipation. That is, each team member needs to anticipate the others’ needs by pushing information rather than pulling information during the task (Demir, McNeese, & Cooke, 2017). In this study, we examined how pushing and pulling information and CAST were associated with Team Situation Awareness (TSA) in both Human-Autonomy Teams (HATs) and all-human teams in a simulated Remotely Piloted Aircraft System (RPAS) task environment. In this research, we integrated the synthetic agent into the Cognitive Engineering Research on Team Tasks Remotely Piloted Aircraft Systems - Synthetic Task Environment (CERTT-RPAS-STE), which was designed to be both a flexible research platform and a realistic task environment with a view to researching team performance and interaction-based measures of team cognition.
In the simulated CERTT testbed, there are three heterogeneous teammates who need to take good photos of each target waypoint while communicating via text chat: (1) the navigator, who creates a dynamic flight plan and provides information about the waypoints, the RPA’s airspeed, and altitude restrictions to the pilot; (2) the pilot, who controls the RPA’s heading, altitude, and airspeed, and negotiates with the photographer in order to take a good photo; and (3) the photographer, who monitors sensor equipment in order to take photographs of target waypoints and sends feedback to the other team members about the quality of the photo. This project aimed to understand how team behaviors and team performance differed between HATs and all-human teams in RPAS operations: (1) the synthetic condition - the pilot role was given to the synthetic teammate, an ACT-R based cognitive model with limited interaction ability (see Ball et al., 2010; Demir et al., 2015); (2) the control condition - the pilot was a randomly selected human participant, just like the other two participants; and (3) the experimenter condition - one of the experimenters served as an expert pilot. The experimenter condition utilized a Wizard of Oz paradigm in which a trained experimenter (located in a separate room) used a script to imitate a synthetic teammate, communicating with participants using limited communication behaviors while pushing and pulling information in a timely manner (robust coordination). Method There were 30 teams (10 for each condition): control teams consisted of three participants randomly assigned to each role; synthetic and experimenter teams included two participants randomly assigned to the navigator and photographer roles. The experiment took place over five 40-minute missions, and the goal was to take as many “good” photos of ground targets as possible while avoiding alarms and rule violations.
During each mission, teams were presented with “roadblocks” through the introduction of a new, ad hoc target waypoint. We collected several measures but focused on: the proportion of roadblocks overcome per mission as an outcome measure of TSA; CAST, a coordination sequence of team interaction across team members (i.e., which team members share their experience of the roadblock with which teammates); and verbal behaviors such as pushing and pulling information. Results and discussion In this team task, effective teamwork involves anticipating the needs of teammates, which in turn means pushing information before it is requested. However, in addition to anticipation, effective coordination is also needed during roadblocks. HATs demonstrated significantly lower levels of CAST than all-human teams. These results indicate that HATs’ lack of anticipation and coordination resulted in poorer TSA performance. These findings can help HATs improve their coordination and communication methodologies. Finally, future studies might examine the relationships highlighted in this study via nonlinear measures of team stability and flexibility based on teams’ communication and coordination patterns during novel events. HATs are here to stay, but improvements to human-machine interaction must continue if we are to improve team effectiveness.
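The interaction-based measures described above reduce to simple proportions over coded message logs. The sketch below is a minimal illustration only, assuming a hypothetical coding of messages as "push" or "pull"; the `Message` type, function names, and sample values are our own, not the study's actual coding scheme:

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str   # e.g., "navigator", "pilot", "photographer"
    kind: str     # "push" = information offered unprompted, "pull" = information requested

def anticipation_ratio(messages):
    """Share of information exchanges that were pushed rather than pulled;
    higher values indicate more anticipation of teammates' needs."""
    pushes = sum(1 for m in messages if m.kind == "push")
    pulls = sum(1 for m in messages if m.kind == "pull")
    total = pushes + pulls
    return pushes / total if total else 0.0

def tsa_outcome(roadblocks_overcome, roadblocks_presented):
    """Outcome measure of TSA: proportion of roadblocks overcome per mission."""
    return roadblocks_overcome / roadblocks_presented
```

For example, a mission log with three pushes and one pull yields an anticipation ratio of 0.75, and a team that overcomes three of five roadblocks gets a TSA outcome score of 0.6.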
Doc 1104 : An Empirical Exploration of Resilience in Human-Autonomy Teams Operating Remotely Piloted Aircraft Systems
Project overview Team resilience is an interactive and dynamic process that develops over time while a team maintains performance. This study aims to empirically investigate systems-level resilience in a Remotely Piloted Aircraft (RPA) System simulated task environment by examining team interaction during novel events. The approach used in the current study to measure systems-level resilience was developed by Hoffman and Hancock (2017). In their conceptual study, resilience was considered a key feature of success in emerging complex sociotechnical systems; in our case, it is applied to Human-Autonomy Teams (HATs). Hoffman and Hancock conceptualized a resilience measure dynamically by means of several components, such as the time it took the system to recognize and characterize anomalies and the time taken to specify and achieve new goals. In their framework, there were two main sub-events which expressed resilience via time-based measures, and upon which we based our measures in this study: (1) the time taken to design a new process and (2) the time required to implement it (Hoffman & Hancock, 2017). Design In this research, three heterogeneous team members used a text-based system to communicate and successfully photograph target waypoints: (1) the navigator – provided information regarding a flight plan with speed and altitude restrictions for each waypoint; (2) the pilot – controlled the RPA by adjusting its altitude and airspeed, negotiating with the photographer in order to take a good photo of the target waypoints; and (3) the photographer – screened camera settings and sent feedback to the other team members regarding the status of the target’s photograph. This study followed the Wizard of Oz paradigm wherein the navigator and photographer were seated together in one room and were told that the pilot was a synthetic agent. In actuality, the pilot was a well-trained experimenter who was working from a separate room.
This ‘synthetic’ pilot used restricted vocabulary to simulate that of a computer. The main manipulations in this study consisted of three degraded conditions: (1) automation failure – role-level display failures while processing specific targets; (2) autonomy failure – the autonomous agent behaved abnormally while processing specific targets (i.e., it provided misinformation to other team members or demonstrated incorrect actions); and (3) malicious cyber-attacks – the hijacking of the synthetic agent, which led it to provide false, detrimental information to the team about the RPA destination. Because the malicious cyber-attack only occurred once (during the final mission), we focus on the automation and autonomy failures in this study. Each failure was introduced at a pre-selected target waypoint for each team, and teams had to find a solution within a limited amount of time; the time limit for each failure was related to its difficulty. Method In this experiment, there were 22 teams, with only two participants randomly assigned to the navigator and photographer roles for each team, because the pilot was a highly trained experimenter. The task comprised ten 40-minute missions in which teams needed to take as many “good” photos of ground targets as possible while avoiding alarms and rule violations. For this study, using the RPAS paradigm, we calculated two team resilience scores: (1) the time taken to design a new process and (2) the time required to implement it (Hoffman & Hancock, 2017). For the calculations, we used the message sent time (in seconds) for each role to express resilience as a proportion of total task time (2400 seconds). As an outcome measure, we used target processing efficiency, a coordination- and time-based performance score based on how quickly teams were able to take a good photo of each target.
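The two time-based resilience scores can be expressed as simple proportions of the 2400-second mission. The sketch below is illustrative only; the function name, argument names, and timestamps are our own assumptions, not the study's analysis code:

```python
TOTAL_TASK_TIME_S = 2400  # one 40-minute mission, in seconds

def resilience_scores(failure_onset_s, new_process_designed_s,
                      new_process_implemented_s, task_time_s=TOTAL_TASK_TIME_S):
    """Two time-based resilience scores, following the Hoffman & Hancock (2017)
    sub-events: (1) time taken to design a new process and (2) time required to
    implement it, each expressed as a proportion of total task time.
    Smaller proportions mean the team recovered faster."""
    time_to_design = (new_process_designed_s - failure_onset_s) / task_time_s
    time_to_implement = (new_process_implemented_s - new_process_designed_s) / task_time_s
    return time_to_design, time_to_implement
```

For instance, a failure injected at 600 s, a new process designed at 720 s, and implemented at 960 s yields scores of 0.05 and 0.10 of total task time.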
Results and discussion We found that teams were more resilient during automation failures and progressed toward targets more successfully than during autonomy failures. We see three possible explanations for this: (1) automation failures were more explicit than autonomy failures, since at least one team member interacted with other teammates; (2) autonomy failures took human teammates more time to identify, because the autonomous agent’s abnormal behavior was not as straightforward; and (3) human teammates overtrusted the autonomous agent, lacked confidence in themselves, and let the failure continue. Acknowledgements This research is supported by ONR Award N000141712382 (Program Managers: Marc Steinberg, Micah Clark). We also acknowledge the assistance of Steven M. Shope of Sandia Research Corporation, who integrated the synthetic agent and the testbed.
Doc 1110 : Trust in Management Information Systems (MIS)
Abstract. Digitalization, enhanced storage capacities, and the Internet of Things increase the volume of data in modern organizations. To process and make use of these data and to avoid information overload, management information systems (MIS) are introduced that collect, process, and analyze relevant data. However, a precondition for the application of MIS is that users trust them. Extending accounts of trust in automation and trust in technology, we introduce a new model of trust in MIS that addresses the conceptual ambiguities of existing conceptualizations of trust and integrates initial empirical work in this field. In doing so, we differentiate between perceived trustworthiness of an MIS, experienced trust in an MIS, intentions to use an MIS, and actual use of an MIS. Moreover, we consider users’ perceived risks and contextual factors (e.g., autonomy at work) as moderators. The introduced model offers guidelines for future research and initial suggestions to foster trust-based MIS use.
Doc 1112 : Information Processing in Work Environment 4.0 and the Beneficial Impact of Intentional Forgetting on Change Management
Abstract. Industry 4.0, based on increasingly progressive digitalization, is a global phenomenon that affects every part of our work. The Internet of Things (IoT) is pushing the process of automation, culminating in the total autonomy of cyber-physical systems. This process is accompanied by a massive amount of data, information, and new dimensions of flexibility. As the amount of available data increases, their specific timeliness decreases. Mastering Industry 4.0 requires humans to master the new dimensions of information and to adapt to relevant ongoing changes. Intentional forgetting can make a difference in this context, as it discards nonprevailing information and actions in favor of prevailing ones. Intentional forgetting is the basis of any adaptation to change, as it ensures that nonprevailing memory items are not retrieved while prevailing ones are retained. This study presents a novel experimental approach that was introduced in a learning factory (the Research and Application Center Industry 4.0) to investigate intentional forgetting as it applies to production routines. In the first experiment (N = 18), in which the participants collectively performed 3046 routine-related actions (t1 = 1402, t2 = 1644), the results showed that highly proceduralized actions were more difficult to forget than actions that were less well-learned. Additionally, we found that the quality of cues that trigger the execution of routine actions had no effect on the extent of intentional forgetting.
Doc 1126 : From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices
Abstract The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science, 132(3429):741–742, 1960. 10.1126/science.132.3429.741; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the ‘what’ of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability)—rather than on practices, the ‘how.’ Awareness of the potential issues is increasing at a fast rate, but the AI community’s ability to take action to mitigate the associated risks is still in its infancy. Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs.
Doc 1153 : Residents’ views on cyclist safety and cycling infrastructure in the City Municipality of Celje
The paper presents the results of two research studies analysing the views of different target populations on cyclist safety and the adequacy of preventive measures in Slovenia. The purpose of both research studies was to identify the shortcomings of different approaches to ensuring cyclist safety, evaluate the adequacy of planned solutions and propose some improvements of preventive actions taken by various stakeholders.
Design/Methods/Approach:
A field survey was conducted among the residents of the City Municipality of Celje (n = 171) on their satisfaction with cyclist safety, while an online survey was carried out among internet users (n = 210) on the usefulness of an alternative approach to raise cyclists’ awareness by digitising cycling routes and safety risks.
Findings:
Results show that respondents are generally not satisfied with cyclist safety, as most believe that municipal efforts are insufficient to ensure it. It was observed that cycling infrastructure needs to be properly regulated and that a positive traffic culture should be promoted at the municipal level, including through the promotion of preventive activities. Both internet users and local residents recognise a strong need to digitise cycle paths by indicating safety risks. Therefore, it would be reasonable to upgrade conventional approaches to raising public awareness by introducing solutions that are useful for cyclists.
Research Limitations / Implications:
The limitation of the research study arises from the fact that its results cannot be generalised to all municipalities, since they apply different approaches to ensuring cyclist safety due to their autonomy and are facing different safety risks. Moreover, due to the use of a non-random sample, caution is necessary when generalising research results.
Practical Implications:
The results of the presented studies are primarily useful to decision-makers and infrastructure managers at national and local level when planning changes and safety measures in the field of cyclist safety.
Originality/Value:
The paper encompasses two studies, the findings of which complement one another substantially and provide a deeper insight into the issues of cyclist safety and preventive actions taken by different stakeholders. They serve as a basis for further exploring this issue in different local environments with a view to obtaining a more comprehensive insight into the key challenges of traffic safety from a broader perspective.
Doc 1154 : Explicit Autonomy, Implicit Control: User Autonomy in the Dichotomous Choice Architecture of Facebook
The article aims to unravel the implicit soft-control of users by Facebook through the framework of choice architecture as proposed by Thaler and Sunstein (2008) in their well-known work Nudge. It explores the dichotomous foundation of Facebook’s choice architecture in which users are given an apparent sense of autonomy while the platform implicitly soft-controls them. This paradoxical power play functions through two broad types of nudges—activity-inducing nudges and passivity-inducing nudges. The mechanism of functioning of these nudges on Facebook is explored while noting the behavioural patterns they induce among users. Facebook’s transition from information architecture to choice architecture is also explored. Thus, the article aims to contribute to the growing body of work, which has underscored the importance of assessing web 2.0 as the ‘technological unconscious’ which has penetrated everyday life.
Doc 1173 : Artificial Intelligence as a New Category of Civil Law
This research gives consideration to the legal status of artificial intelligence technology. Artificial intelligence as a future technology is actively expanding its capabilities at the present stage of development of society. In this regard, the concept of ‘artificial intelligence’ and the application of the rule of law in resolving issues of legal responsibility for the operation of artificial intelligence technologies require definition. The main purpose of this study is to define the concept of ‘artificial intelligence’ and determine whether artificial intelligence technologies are the object or subject of law. The article provides the analysis of possible approaches to the disclosure of the concept of ‘artificial intelligence’ as a legal category and its relationship with the concepts of ‘robot’ and ‘cyberphysical system’. The issues of legal responsibility for the operation of artificial intelligence are revealed. For the purposes hereof, the methods of collecting and studying the singularities; generalizations; the methods of scientific abstraction; the methods of cognition of consistent patterns, as well as the method of objectivity, concreteness, pluralism and a whole range of other methods were used. The study has concluded that artificial intelligence technology is an autonomous self-organizing computer-software or cyberphysical system with the ability and capability to think, learn, make decisions independently, perceive and model surrounding images and symbols, relationships, and processes, and implement its own decisions. The following general properties of artificial intelligence technologies have been defined: autonomy; the ability to perceive the conditions (situation), make and implement its own decisions; the ability to adapt its own behavior, to learn, to communicate with other artificial intelligence, and to consider, accumulate and reproduce experience (including human experience).
Within the present historical period, artificial intelligence technology should be considered the object of law. The legal responsibility for the operation of artificial intelligence lies with the operator or another person who sets the parameters of its operation and controls its behavior. The creator (manufacturer) of artificial intelligence is also recognized as a responsible person. This conclusion makes it possible to bring the category of artificial intelligence into the legal field and determine the persons responsible for its poor-quality operation.
Doc 1192 : Fog Computing, Edge Computing and a return to privacy and personal autonomy
Abstract Computing in the Fog and at the Edge of the Internet of Things represents the greatest bilateral data exchange in the lives of others since the advent of the telephone. It may offer more than improved response times and reduced latency in home and personal data services. It may reintroduce user and community control over the data lives generated by remote systems introduced by the Internet of Things. At the same time, the invasive risks it presents to home and personal security and privacy may increase regulatory complexity to protect those rights. External sensing of human activity through the Smart City, home and personal IoT devices with locational data introduces many complex issues, beneficial and dangerous. These issues are resolved in a legal-technical-social milieu. Together, technical and legal controls for Fog and Edge computing may mediate the data avatars we produce and protect privacy, safety, and personal autonomy. Those controls must be in harmony, directed towards the protection of the people, and subject to sanctions for failure to comply.
Cyber-physical systems operate in our real world, constantly interacting with the environment and collaborating with other systems. The increasing number of devices will make it infeasible to control each one individually. It will also be infeasible to prepare each of them for every imaginable rapidly unfolding situation. Therefore, we must increase the autonomy of future Cyber-physical Systems. Making these systems self-aware allows them to reason about their own capabilities and their immediate environment. In this article, we extend the idea of the self-awareness of individual systems toward networked self-awareness. This gives systems the ability to reason about how they are being affected by the actions and interactions of others within their perceived environment, as well as in the extended environment that is beyond their direct perception. We propose that different levels of networked self-awareness can develop over time in systems as they do in humans. Furthermore, we propose that this could have the same benefits for networks of systems that it has had for communities of humans, increasing performance and adaptability.
Doc 1196 : Self-aware Cyber-Physical Systems
https://doi.org/10.1145/3375716 K.L. Bellman, Christopher Landauer, Nikil Dutt, Lukas Esterle, Andreas Herkersdorf, Axel Jantsch, Nima TaheriNejad, Peter R. Lewis, Marco Platzner, Kalle Tammemäe
In this article, we make the case for the new class of Self-aware Cyber-physical Systems. By bringing together the two established fields of cyber-physical systems and self-aware computing, we aim at creating systems with strongly increased yet managed autonomy, which is a main requirement for many emerging and future applications and technologies. Self-aware cyber-physical systems are situated in a physical environment and constrained in their resources, and they understand their own state and environment and, based on that understanding, are able to make decisions autonomously at runtime in a self-explanatory way. In an attempt to lay out a research agenda, we bring up and elaborate on five key challenges for future self-aware cyber-physical systems: (i) How can we build resource-sensitive yet self-aware systems? (ii) How to acknowledge situatedness and subjectivity? (iii) What are effective infrastructures for implementing self-awareness processes? (iv) How can we verify self-aware cyber-physical systems and, in particular, which guarantees can we give? (v) What novel development processes will be required to engineer self-aware cyber-physical systems? We review each of these challenges in some detail and emphasize that addressing all of them requires the system to make a comprehensive assessment of the situation and a continual introspection of its own state to sensibly balance diverse requirements, constraints, short-term and long-term objectives. Throughout, we draw on three examples of cyber-physical systems that may benefit from self-awareness: a multi-processor system-on-chip, a Mars rover, and an implanted insulin pump. These three very different systems nevertheless have similar characteristics: limited resources, complex unforeseeable environmental dynamics, high expectations on their reliability, and substantial levels of risk associated with malfunctioning. 
Using these examples, we discuss the potential role of self-awareness in both highly complex and rather simpler systems, and as a main conclusion we highlight the need for research on the above-listed topics.
Doc 1205 : PROBLEMS OF THE FORMATION OF THE CONCEPTUAL APPARATUS IN THE FIELD OF ARTIFICIAL INTELLIGENCE
The study of the problems of the formation of the conceptual apparatus in the field of legal support of artificial intelligence aims to develop effective legal solutions in order to regulate new digital technologies. The work is based on a set of general scientific and special legal methods, including analysis, description, generalization, and comparative law. The formation of legal definitions of artificial intelligence and related concepts (robot, cyber-physical system, etc.) requires the identification of the main legal features of artificial intelligence. The following key characteristics of artificial intelligence are identified: optional hardware implementation; the ability of the system to analyze the environment; autonomy in operation; the ability to accumulate experience, assess it, and implement the task of self-learning; the presence of “intelligence”, described through the categories of “rationality” or simply the ability to “think like a person” or “act like a person” in all or in narrowly defined circumstances. Future legal regimes of artificial intelligence should take into account the possibility of weak artificial intelligence as a kind of complex thing and strong artificial intelligence that will “require” new legal solutions. The formation of legal definitions in the field of artificial intelligence must meet the requirements of universality, systematicity and ethics, and also allow, based on the selected properties, the establishment of systemic hierarchical relationships between the concepts used in the field of artificial intelligence. The study develops the theoretical and practice-oriented provisions of information law in matters of legal support for artificial intelligence, and also creates the basis for the formation of legislation in this area. The work can be used in further research activities on the legal regulation of the development, functioning and use of artificial intelligence systems.
Doc 1238 : Narrative designed proposals of mechanical or electro-mechanical optimization to the current wheelchairs used by people with severe locomotion restraints
https://doi.org/10.35530/it.071.04.1830 Aura Spinu, Vladimir Cardei, Valeriu Avramescu, Ioana Andone, Aurelia Romila, Aurelian Anghelescu, Mihail Tiberiu Avramescu, Ana-Maria Bumbea, Elena Valentina Ionescu, Vlad Ciobanu, Cristina Daia, Gelu Onose
The field of mechatronic/robotic wearable exoskeletons, specifically those designated for assistance/rehabilitation in severe neuro-/locomotor disabling conditions of the lower limbs, is considered to have great potential for radically changing the harsh condition of wheelchair users, by restoring their defining human traits: bipedal, vertical stance and gait. But even the most advanced such complex devices are not yet effectively able to largely replace the wheelchair. Therefore, until complete wheelchair substitutes meet all the necessary related requirements for this purpose, we have determined, and accordingly designed – from a double perspective: of professionals and of (a) consumer – a series of necessary and rather accessible/feasible, consistent mechanical and electro-mechanical improvements to the current common type of wheelchair, in order to improve global functioning, autonomy and, consequently, the quality of life of people in need with severe mobility restraints. These, for now, narratively innovative concepts and specifically designed practical/technological-constructive solutions target 10 main kinds of beneficial outcomes, i.e. a decalogue and their expected derivatives (most of them previously imagined by us but not completely designed until now, two of them already achieved but in need of updating, and another four entirely new), which could result in an “all-in-one” product paradigm that, to our knowledge, is not available at present. The model of wheelchair we propose is, at the same time, modular, so a given consumer can purchase/be offered only his/her own case-specific optimization components of it.
Doc 1240 : Withholding effort in sharing knowledge in online space: differential effects of task characteristics
Purpose Knowledge withholding is an important but under-studied topic, referring to the phenomenon in which individuals give less than full effort to contributing knowledge. To investigate the differential effects of task characteristics on individuals’ knowledge withholding behavior in online space, this study develops and empirically verifies a theoretical model that covers the five core task characteristics in the job characteristics model (autonomy, identity, feedback, skill variety and significance), knowledge sharing self-inefficacy, and withholding effort in sharing knowledge. Design/methodology/approach Using an online survey method, this study collected empirical data from 351 general internet users from 30 provincial administrative units in China. The data were analyzed using the structural equation modeling technique of partial least squares. Findings The analysis results indicate that autonomy negatively affects, while identity, skill variety and significance positively affect, users’ knowledge withholding behavior in online space through the mediation of knowledge sharing self-inefficacy, and that three task characteristics (autonomy, identity and feedback) strengthen the relationship between knowledge sharing self-inefficacy and knowledge withholding. Practical implications This study provides valuable insights for reducing knowledge withholding behavior in online space. Operationally, different levels of task characteristics such as autonomy, identity and feedback can be set to prevent users from perceiving themselves as inefficacious, and to weaken the behavioral expression of knowledge sharing self-inefficacy. Originality/value This study provides a comprehensive understanding of the differential effects of task characteristics on knowledge withholding in online space, and improves understanding of the boundaries of withholding effort in sharing knowledge in online space.
Doc 1261 : Specifying autonomy in the Internet of Things: the autonomy model and notation
Driven by digitization in society and industry, automating behavior in an autonomous way substantially alters industrial value chains in the smart service world. As processes are enhanced with sensor and actuator technology, they become digitally interconnected and merge into an Internet of Things (IoT) to form cyber-physical systems (CPS). Using these automated systems, enterprises can improve the performance and quality of their operations. However, it is currently neither feasible nor reasonable to equip every machine with full autonomy when networking with other machines or people. It is necessary to specify rules for machine behavior that also determine an adequate degree of autonomy to realize the potential benefits of the IoT. Yet there is a lack of methodologies and guidelines to support the design and implementation of machines as explicit autonomous agents, such that many designs consider autonomy only implicitly. To address this research gap, we perform a comprehensive literature review to extract 12 requirements for the design of autonomous agents in the IoT. We introduce a set of constitutive characteristics for agents and a classification framework for interactions in multi-agent systems. We integrate our findings by developing a conceptual modeling language consisting of a meta model and a notation that facilitates the specification and design of autonomous agents within the IoT as well as CPS: the Autonomy Model and Notation. We illustrate and discuss the approach and its limitations.
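The idea of specifying an explicit, bounded degree of autonomy for a machine can be illustrated in code. The sketch below is our own toy encoding, not the Autonomy Model and Notation itself; the ordinal scale, class names, and rule check are all assumptions made for illustration:

```python
from dataclasses import dataclass, field
from enum import Enum

class AutonomyDegree(Enum):
    """Illustrative ordinal scale for a machine's degree of autonomy."""
    NONE = 0         # fully remote-controlled
    SUPERVISED = 1   # may act, but only within explicitly granted rules
    CONDITIONAL = 2  # autonomous within a specified rule set
    FULL = 3         # acts without external intervention

@dataclass
class AgentSpec:
    """Toy specification of an IoT agent with explicit, bounded autonomy."""
    name: str
    degree: AutonomyDegree
    rules: list = field(default_factory=list)  # behaviors the agent may perform on its own

    def may_act_autonomously(self, action: str) -> bool:
        if self.degree is AutonomyDegree.NONE:
            return False
        if self.degree is AutonomyDegree.FULL:
            return True
        # intermediate degrees: only rule-covered actions are allowed
        return action in self.rules
```

Under this toy scheme, a valve agent with CONDITIONAL autonomy and the rule "close_on_leak" may close autonomously during a leak, but may not reroute flow without external approval.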
Doc 1271 : Consumer Choicemaking and Choicelessness in Hyperdigital Marketspaces
Technologies, especially Internet-based digital ones, are reshaping choice processes – actual considerations and actions, as well as perceptions of these – in massive, often fundamental, ways. In this paper, our goal is to explore choice processes in general, and especially choice processes in hyperdigital marketspaces (i.e., with massively, pervasively interconnected things), with examples drawn from U.S. macro consumption contexts. We start with a short review of discourses on choice and choicelessness and then shift to the emerging era of technology-shaped choice processes that are especially observable in contemporary hyperdigital marketspaces. For the increasingly large swaths of market segments that consume, indeed live, digitally, we find deft symbolic sublimations and inversions happening, wherein manipulation is perceived as autonomy-enhancing.
Doc 1277 : Combining the strength of centralized control and distributed autonomy for crowdsourcing design: An integrated model of Blackboard and Bayesian network
Socialized product design (SPD) mode, enabled by advanced internet technologies and sharing-economy trends, has the capability of utilizing the design resources of large numbers of socialized designers (SDs) to carry out design tasks that used to involve only the internal R&D staff of companies. During SPD projects, different kinds of mechanisms can be applied to organize the SDs. Some tend toward centralized control (e.g. crowdsourcing design) and some toward distributed autonomy (e.g. opensourcing design). Both centralized control and distributed autonomy have their strengths and limitations for SPD. Centralized control enables more organized, focused, and efficient project execution, but it limits the emergence of collective intelligence among the SDs. Distributed autonomy helps to explore the innovation potential of SDs by granting them the freedom of communication and mutual inspiration, but it may result in an unreliable and unpredictable design process. To complement the advantages of centralized control and distributed autonomy in SPD, an integrated model of Blackboard and Bayesian network is established in this paper. The Blackboard, whose Control modules are specially customized for human Knowledge sources, guarantees overall control of the distributed design process while permitting a certain level of autonomy to the SDs. The Bayesian network, built with an improved Bayesian causal map method, is an embedded Control module of the Blackboard which evaluates design solutions according to incomplete collective judgments from SDs. The operability of the integrated model has been verified through a case study of a 3D printer conceptual design project.
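The division of labor between the Blackboard (centralized control) and the embedded evaluation module can be sketched roughly as follows. This is a heavily simplified illustration, not the paper's model: a naive multiplicative scorer stands in for the actual Bayesian causal map, and all names, criteria, and probabilities are invented:

```python
class Blackboard:
    """Shared workspace: socialized designers (knowledge sources) post
    candidate design solutions, and a control module decides which to pursue."""
    def __init__(self, control):
        self.control = control   # callable: solution evidence -> score
        self.candidates = []     # (designer, evidence) pairs

    def post(self, designer, evidence):
        """Distributed autonomy: any designer may contribute a candidate."""
        self.candidates.append((designer, evidence))

    def select_best(self):
        """Centralized control step: rank the contributed solutions."""
        return max(self.candidates, key=lambda c: self.control(c[1]))

def naive_bayes_control(cpt):
    """Toy stand-in for the embedded Bayesian-network control module:
    multiply conditional probabilities of the (possibly incomplete) designer
    judgments; unknown judgments fall back to an uninformative 0.5."""
    def score(evidence):
        p = 1.0
        for criterion, judgment in evidence.items():
            p *= cpt.get((criterion, judgment), 0.5)
        return p
    return score
```

In use, designers post solutions with partial judgments (e.g. only a feasibility rating), and the control module still produces a ranking, which mirrors the paper's point about handling incomplete collective judgments.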
Doc 1278 : Shared Control Between Pilots and Autopilots: An Illustration of a Cyberphysical Human System
The 21st century is witnessing large transformations in several sectors related to autonomy, including energy, transportation, robotics, and health care. Decision making using real-time information over a large range of operations (as well as the ability to adapt online in the presence of various uncertainties and anomalies) is the hallmark of an autonomous system. To design such a system, a variety of challenges must be addressed. Uncertainties may occur in several forms, both structured and unstructured. Anomalies may often be severe and require rapid detection and swift action to minimize damage and restore normalcy. This article addresses the difficult task of making autonomous decisions in the presence of severe anomalies. While the specific application of focus is flight control, the overall solutions proposed are applicable for general complex dynamic systems.
Doc 1292 : A Human-Cyber-Physical System toward Intelligent Wind Turbine Operation and Maintenance
This work proposes a novel concept for an intelligent and semi-autonomous human-cyber-physical system (HCPS) to operate future wind turbines in the context of Industry 5.0 technologies. The exponential increase in the complexity of next-generation wind turbines requires artificial intelligence (AI) to operate the machines efficiently and consistently. Evolving the current Industry 4.0 digital twin technology beyond a sole aid for the human decision-making process, the digital twin in the proposed system is used for highly effective training of the AI through machine learning. Human intelligence (HI) is elevated to a supervisory level, in which high-level decisions made through a human–machine interface break the autonomy, when needed. This paper also identifies and elaborates key enabling technologies (KETs) that are essential for realizing the proposed HCPS.
Although many are concerned that autonomous weapon systems may make war “too easy,” no one has addressed how their use may alter the distribution of the constitutional war power. Drones, cyber operations, and other technological advances in weaponry already allow the United States to intervene militarily with minimal boots on the ground, and increased autonomy in weapon systems will further reduce risk to soldiers. As human troops are augmented and supplanted by robotic ones, it will be politically easier to justify using force, especially for short-term military engagements. Accordingly, one of the remaining incentives for Congress to check presidential warmongering—popular outrage at the loss of American lives—will diminish. The integration of autonomous weapon systems into U.S. military forces will therefore contribute to the growing concentration of the war power in the hands of the Executive, with implications for the international doctrine of humanitarian intervention.
Doc 1339 : Methodological Aspects of Forming Cybersecurity Bachelors' Readiness for Professional Activity by Means of Collaborative Learning in an Educational-Digital Environment
The article outlines the methodological aspects of forming cybersecurity bachelors' readiness for professional activity by means of collaborative learning in an educational-digital environment, which can raise the level of their professional training. This readiness is, in essence, the selective and predictable activity of a cybersecurity bachelor at the stage of preparation for professional work; it arises from the moment the purpose of activity is determined on the basis of perceived needs and motives, a personal plan, and guidelines and models for future action. It is shown that this readiness can be formed by means of collaborative learning in an educational-digital environment. The structure of virtual modelling in the online laboratory for cybersecurity bachelors in the educational-digital environment is defined and consists of three interfaces: the student interface, the teacher interface, and the administrator interface. It is noted that, for the collaborative training of cybersecurity bachelors to succeed, the environment in which they interact must be based on democracy, equality, and autonomy; under this condition, training will be at a high level and, as a result, successful. The organization of collaborative training of cybersecurity bachelors in an educational-digital environment should be based on the principles of social contact.
The main principles of collaborative learning in mobile applications for cybersecurity bachelors in the educational-digital environment were found to include: the systematic establishment of social contacts during training, the development of communication, the building of relationships among students on the basis of democracy and autonomy, changing the roles of participants in the educational process over time depending on the nature of the problem, and psychological comfort.
Doc 1343 : ARTIFICIAL INTELLIGENCE IN MEDICAL PRACTICE: REGULATIVE ISSUES AND PERSPECTIVES
The aim of the research is to identify the nature and specifics of AI in healthcare, to establish the complexities of implementing AI in healthcare, and to propose ways to eliminate them. Materials and methods: This study was conducted during June–October 2020. Through a broad literature review and analysis of EU and USA regulatory acts, scientific research, and the opinions of forward-thinking experts in this sphere, this paper provides a guide to understanding the essence of AI in healthcare and the specifics of its regulation. It is based on dialectical, comparative, analytic, synthetic, and comprehensive methods. Results: One of the first broad definitions of AI read: “Artificial Intelligence is the study of ideas which enable computers to do the things that make people seem intelligent … The central goals of Artificial Intelligence are to make computers more useful and to understand the principles which make intelligence possible.” There are two candidate names for this technology: “Artificial Intelligence” and “Augmented Intelligence.” We prefer the more common “Artificial Intelligence,” because, in our view, “Augmented Intelligence” leaves too much room for a “human supervision” reading, which would limit the sense of AI as it continues to develop. In current practice, AI is interpreted in three forms: AI as a simple electronic tool without any autonomy (an electronic assistant or “calculator”), AI as an entity with some autonomy but under human control, and AI as an entity with broad autonomy, wholly or partly substituting for human activity; we must admit that the first cannot be considered AI at all at the current stage of scientific development.
Descriptions of AI often centre on big technological products such as DeepMind (by Google), Watson Health (by IBM), and Healthcare's Edison (by General Electric), but in fact many smaller technologies also use AI in the healthcare field: smartphone applications, wearable health devices, and other examples of the Internet of Things. At the current stage of development, AI in medical practice exists in three technical forms (software, hardware, and mixed) using three main scientific-statistical approaches: the flowchart method, the database method, and the decision-making method. All of them are usable, but they are differently suited to AI implementation. The main issues of AI implementation in healthcare relate to the nature of the technology itself and to the complexities of legal support in terms of safety and efficiency, privacy, ethics, and liability. Conclusion: The conducted analysis makes it possible to identify a number of pros and cons of using AI in healthcare. Undoubtedly this is a promising area with many gaps and grey zones to fill in. Furthermore, the main challenge lies not in the technology itself, which is rapidly growing, evolving, and uncovering new areas of use, but rather in the legal framework, which clearly lacks appropriate regulation and requires political, ethical, and financial transformation. Thus, the core questions remain: Is this technology by its nature suitable for healthcare at all? Is the current legislative framework appropriate for regulating AI in terms of safety, efficacy, and premarket and postmarket monitoring? How should the model of liability for the use of AI technology in healthcare be constructed? How can privacy be ensured without restricting the use of AI technology? Should intellectual property rights prevail over public health concerns? Many questions must be addressed in order to keep pace with technological development and reap the benefits of its practical implementation.
Doc 1359 : Towards trustworthy Cyber-physical Production Systems: A dynamic agent accountability approach
Smart manufacturing is a challenging trend fostered by the Industry 4.0 paradigm. In this scenario Multi-Agent Systems (MAS) are particularly well suited to modelling such intelligent, decentralised processes, thanks to their autonomy in pursuing collective and cooperative goals. From a human perspective, however, increasing confidence in the trustworthiness of MAS-based Cyber-physical Production Systems (CPPS) remains a significant challenge. Manufacturing services must comply with strong requirements in terms of reliability, robustness and latency, and solution providers are expected to ensure that agents operate within certain boundaries of the production and mitigate unintended behaviours during the execution of manufacturing activities. To address this concern, a Manufacturing Agent Accountability Framework is proposed: a dynamic authorization framework that defines and enforces boundaries within which agents are freely permitted to exploit their intelligence to reach individual and collective objectives. The expected behaviour of agents is to adhere to CPPS workflows, which implicitly define acceptable regions of behaviour and production feasibility. The core contributions of the proposed framework are: a manufacturing accountability model, the representation of Leaf Diagrams for the governance of agent behavioural autonomy, and an ontology of declarative policies for identifying and avoiding ill-intentioned behaviours in the execution of CPPS services. We outline the application of this enhanced trustworthiness framework to an agent-based manufacturing use case for the production of a variety of hand tools.
Doc 1362 : Criminal Aspects of Robotics Applications
https://doi.org/10.46398/cuestpol.3968.38 Fedor R. Sundurov, Ildar Begishev, Zarina Ilduzovna Khisamova, Igor Izmailovich Bikeev, Elvira Yuryevna Latypova, Timur Radikovich Ishbuldin
Direct and indirect criminological risks of the use of robotics are analyzed and issues of responsibility of the manufacturer (developer)and/or owner (user) of robotics are discussed for acts committed with their participation. This essay discusses promising areas of legal research related to robotics and its corresponding legal regulations. The definition of robotics is based and proposed as all categories of robots in their broadest sense, regardless of their purpose, degree of danger, mobility, or autonomy, as well as cyber-physical systems with artificial intelligence in any form. It is proposed to recognize socially dangerous acts committed with the use of robotics as crimes committed in a generally dangerous way if there are grounds for doing so. It is concluded that the commission of acts through robotics is capable, in certain cases, of creating a plurality of crimes in the form of a real aggregate. The expanding powers of State security bodies, which can carry out the functions of state policy development, legal regulation, control, and supervision in the field of robotics application, have been verified.
Doc 1369 : Embodied Cooperation to Promote Forgiving Interactions With Autonomous Machines
During every waking moment, we must engage with our environments, the people around us, the tools we use, and even our own bodies to perform actions and achieve our intentions. There is a spectrum of control that we have over our surroundings that spans the extremes from full to negligible. When the outcomes of our actions do not align with our goals, we have a tremendous capacity to displace blame and frustration on external factors while forgiving ourselves. This is especially true when we cooperate with machines; they are rarely afforded the level of forgiveness we provide our bodies and often bear much of our blame. Yet, our brain readily engages with autonomous processes in controlling our bodies to coordinate complex patterns of muscle contractions, make postural adjustments, adapt to external perturbations, among many others. This acceptance of biological autonomy may provide avenues to promote more forgiving human-machine partnerships. In this perspectives paper, we argue that striving for machine embodiment is a pathway to achieving effective and forgiving human-machine relationships. We discuss the mechanisms that help us identify ourselves and our bodies as separate from our environments and we describe their roles in achieving embodied cooperation. Using a representative selection of examples in neurally interfaced prosthetic limbs and intelligent mechatronics, we describe techniques to engage these same mechanisms when designing autonomous systems and their potential bidirectional interfaces.
In this review, the latest research on wearable chemosensors is presented, focusing on results from the open literature, mainly from the last three years. Progress in wearable chemosensors is presented with attention to the measuring technologies, their ability to provide robust data, the manufacturing techniques, and the sensors' autonomy and ability to produce power. Statistical studies, however, have raised the issue of patients' trust in these technologies: people do not trust having their personal data transferred, stored, and processed across the vastness of the internet, even though this is what enables timely diagnosis and treatment. The issues of power consumption and autonomy of chemosensor-integrated devices are also studied, and the most recent solutions to these problems are thoroughly presented.
Doc 1385 : Autonomy loss, privacy invasion and data misuse as psychological barriers to peer-to-peer collaborative car use
Abstract The present study aimed to identify psychological barriers which potentially prevent individuals from implementing collaborative car use in their every-day mobility behaviour. We suggested a model consisting of four psychological barriers: Autonomy Loss, Privacy Invasion, Interpersonal Distrust, and Data Misuse. Perceived Financial Benefit was included as a main incentive for collaborative car use. Using two samples, a community (N = 176) and a student sample (N = 265), three forms of peer-to-peer collaborative car use were examined: lending your own car to another private person (Lending To), renting a car from another private person (Renting From) and sharing rides with others (Ridesharing). For all three forms, a standardised questionnaire was developed which included the psychological barriers, self-reported collaborative car use intention and behaviour, and evaluations of scenarios. The results showed that different barriers predicted specific forms of collaborative car use: Autonomy Loss was connected negatively with Ridesharing and Privacy Invasion predicted Lending To negatively. Data Misuse was related negatively with Renting From, when the renting was arranged via internet. Interpersonal Distrust showed no predictive value for collaborative car use. Perceived Financial Benefit was a consistent incentive for all forms of collaborative car use. Overall, the results confirm the relevance of psychological barriers for collaborative car use. Practical implications to overcome the psychological barriers are discussed.
Doc 1398 : Research Trends and Future Perspectives in Marine Biomimicking Robotics
https://doi.org/10.3390/s21113778 Jacopo Aguzzi, Corrado Costa, Marcello Calisti, Valerio Funari, Sergio Stefanni, Roberto Danovaro, Helena I. Gomes, F. Vecchi, Lewis Dartnell, Peter Weiss, Kathrin Nowak, Damianos Chatzievangelou, Simone Marini
Mechatronic and soft robotics take inspiration from the animal kingdom to create new high-performance robots. Here, we focus on marine biomimetic research and use innovative bibliographic-statistics tools to highlight established and emerging knowledge domains. A total of 6980 scientific publications were retrieved from the Scopus database (1950–2020), evidencing a sharp increase in research in 2003–2004. Clustering analysis of country collaborations showed two major clusters: Asian-North American and European. Three significant areas emerged: (i) energy provision, whose advancement mainly relies on microbial fuel cells; (ii) biomaterials, for soft-robotic solutions that are not yet fully operational; and (iii) design and control, chiefly oriented to locomotor designs. In this scenario, marine biomimicking robotics still lacks solutions for long-lasting energy provision, which presently hinders operational autonomy. Identifying the natural processes by which living organisms obtain energy is thus urgent to sustain energy-demanding tasks, while natural designs must increasingly inform engineering to optimize energy consumption.
Doc 1412 : Knowledge in the model of dynamics and stability of an industrial enterprise
The formation and development of the knowledge economy logically lead to the need for an in-depth study of the nature and properties of information in economic systems. The priority of the information resource and of knowledge in the activities of industrial enterprises is determined both by the prospects for developing science-intensive products competitive in world and domestic markets, and by the desire to ensure the stable operation of the enterprise in a highly disturbed business environment. Through the search for, and analysis of, the dependence of production, financial, and economic processes on control information, the systemic interaction of enterprise resources and the dynamics of key indicators of industrial enterprises' activity are revealed. From the standpoint of thermodynamics, statistical physics, and cybernetics, it is proposed to extend V. Trapeznikov's information concept to substantiate and model the dependence of a number of economic and financial indicators on the amount of accumulated information in the controlled complex of an industrial enterprise. The relationship between the volume of control information, the uncertainty of the functioning of an industrial enterprise, and the achieved level of knowledge about it is modelled and interpreted. A condition is formulated for the stability of the activity-effect indicator and of the enterprise's financial stability (the autonomy ratio). A mathematical condition for the range of changes in the level of knowledge of industrial enterprises is substantiated.
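As an aside, the autonomy ratio mentioned above has a standard definition in financial analysis: the share of total assets financed by the enterprise's own capital. The minimal sketch below computes it; the 0.5 stability threshold is a common rule of thumb assumed here, not a value taken from the paper.

```python
def autonomy_ratio(equity: float, total_assets: float) -> float:
    """Equity / total assets; higher means less dependence on creditors."""
    if total_assets <= 0:
        raise ValueError("total assets must be positive")
    return equity / total_assets

def is_stable(equity: float, total_assets: float, threshold: float = 0.5) -> bool:
    """Conventional stability check: own capital covers at least half of assets."""
    return autonomy_ratio(equity, total_assets) >= threshold

print(autonomy_ratio(6_000_000, 10_000_000))  # 0.6
print(is_stable(6_000_000, 10_000_000))       # True
```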
Doc 1416 : Error, Reliability and Health-Related Digital Autonomy in AI Diagnoses of Social Media Analysis
The rapid expansion of computational tools and of data science methods in healthcare has, undoubtedly, raised a whole new set of bioethical challenges. As Laacke and colleagues (2021) rightly note,…
In “Artificial Intelligence, Social Media and Depression,” Laacke and colleagues (2021) consider the ethical implications of artificial intelligence depression detector (AIDD) tools to assist pract…
Doc 1423 : Achieving Ethical Algorithmic Behaviour in the Internet of Things: A Review
The Internet of Things is emerging as a vast, inter-connected space of devices and things surrounding people, many of which are increasingly capable of autonomous action, from automatically sending data to cloud servers for analysis, changing the behaviour of smart objects, to changing the physical environment. A wide range of ethical concerns has arisen in their usage and development in recent years. Such concerns are exacerbated by the increasing autonomy given to connected things. This paper reviews, via examples, the landscape of ethical issues, and some recent approaches to address these issues concerning connected things behaving autonomously as part of the Internet of Things. We consider ethical issues in relation to device operations and accompanying algorithms. Examples of concerns include unsecured consumer devices, data collection with health-related Internet of Things, hackable vehicles, behaviour of autonomous vehicles in dilemma situations, accountability with Internet of Things systems, algorithmic bias, uncontrolled cooperation among things, and automation affecting user choice and control. Current ideas towards addressing a range of ethical concerns are reviewed and compared, including programming ethical behaviour, white-box algorithms, black-box validation, algorithmic social contracts, enveloping IoT systems, and guidelines and code of ethics for IoT developers; a suggestion from the analysis is that a multi-pronged approach could be useful based on the context of operation and deployment.
Doc 1457 : A Survey of Human Activity Recognition in Smart Homes Based on IoT Sensors Algorithms: Taxonomies, Challenges, and Opportunities with Deep Learning
Recent advances in Internet of Things (IoT) technologies and the falling cost of sensors have encouraged the development of smart environments, such as smart homes. Smart homes can offer home-assistance services that improve the quality of life, autonomy, and health of their residents, especially the elderly and dependent. To provide such services, a smart home must be able to understand the daily activities of its residents. Techniques for recognizing human activity in smart homes are advancing daily, yet new challenges keep emerging. In this paper, we present recent algorithms, works, challenges, and a taxonomy of the field of human activity recognition in smart homes through ambient sensors. Moreover, since activity recognition in smart homes is a young field, we raise open problems and missing contributions, and propose directions, research opportunities, and solutions to accelerate advances in this field.
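To make the pipeline concrete, here is a minimal sketch of the typical ambient-sensor recognition chain such surveys cover: segment the event stream into fixed time windows, count per-sensor activations as features, and classify with a nearest-centroid rule. Sensor names, activity labels, and the classifier choice are all hypothetical; real systems use richer features and deep models.

```python
from collections import Counter
import math

SENSORS = ["kitchen_motion", "stove_contact", "bed_pressure"]

def window_features(events):
    """events: list of sensor names that fired inside one time window."""
    counts = Counter(events)
    return [counts[s] for s in SENSORS]

def train_centroids(labelled_windows):
    """labelled_windows: list of (events, activity) pairs -> per-class mean features."""
    sums, n = {}, Counter()
    for events, label in labelled_windows:
        acc = sums.setdefault(label, [0] * len(SENSORS))
        for i, v in enumerate(window_features(events)):
            acc[i] += v
        n[label] += 1
    return {lab: [v / n[lab] for v in acc] for lab, acc in sums.items()}

def classify(events, centroids):
    """Assign the window to the activity with the nearest centroid."""
    f = window_features(events)
    return min(centroids, key=lambda lab: math.dist(f, centroids[lab]))

train = [(["kitchen_motion", "stove_contact", "kitchen_motion"], "cooking"),
         (["bed_pressure", "bed_pressure"], "sleeping")]
model = train_centroids(train)
print(classify(["stove_contact", "kitchen_motion"], model))  # cooking
```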
Doc 1461 : A DAG Blockchain Enhanced User-autonomy Spectrum Sharing Framework for 6G-enabled IoT
The rapidly growing number of Internet of Things (IoT) devices poses new challenges for spectrum management in future wireless communication networks. It is critical to achieve efficient and dynamic spectrum management in the sixth-generation wireless communication networks (6G) era. To tackle the challenges of managing a large-scale IoT network with heterogeneous devices, we propose a directed acyclic graph (DAG) blockchain enhanced user-autonomy spectrum sharing model. As the proposed consensus rule is closely related to system utility, the swarm intelligence of users gradually reaches the point of convergence in the process of blockchain consensus. We analyse the effect of the tip selection method of DAG blockchain on spectrum allocation utility. A dynamic tip selection method is proposed to enhance the global utility, which is related to the spectrum supply-demand. In addition, the ring signature technique is utilized to realize privacy protection during the sharing process. Simulation indicates that the proposed tip selection method achieves a 10% enhancement in terms of the global utility. Furthermore, significant reductions in administrative expense and reliability improvement are demonstrated by the simulation results. The stability of the tip number in the proposed model has been proved theoretically, which is also validated by simulation experiments.
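The tip-selection idea can be illustrated with a toy DAG ledger: each new spectrum-sharing transaction approves up to two existing tips, chosen with probability biased by cumulative weight. The `alpha` bias parameter and the data structures below are our own simplification for illustration, not the paper's dynamic, supply-demand-driven tip-selection method.

```python
import random

class DAG:
    def __init__(self):
        self.approves = {"genesis": []}  # transaction -> approved parents

    def tips(self):
        """Transactions not yet approved by anyone."""
        approved = {p for ps in self.approves.values() for p in ps}
        return [tx for tx in self.approves if tx not in approved]

    def cumulative_weight(self, tx):
        """1 + number of transactions that directly or indirectly approve tx."""
        children = {t: [] for t in self.approves}
        for child, parents in self.approves.items():
            for p in parents:
                children[p].append(child)
        seen, stack = {tx}, [tx]
        while stack:
            for c in children[stack.pop()]:
                if c not in seen:
                    seen.add(c)
                    stack.append(c)
        return len(seen)

    def add_transaction(self, name, alpha=1.0):
        """Attach `name` to up to two tips, sampled with weight bias alpha."""
        tips = self.tips()
        weights = [self.cumulative_weight(t) ** alpha for t in tips]
        parents = random.choices(tips, weights=weights, k=min(2, len(tips)))
        self.approves[name] = list(set(parents))

dag = DAG()
dag.add_transaction("tx1")  # approves the only tip, "genesis"
dag.add_transaction("tx2")  # approves "tx1"
print(dag.tips())           # ['tx2']
```

Raising `alpha` concentrates approvals on heavy branches (faster convergence), while lowering it spreads them out; a dynamic rule would adjust it from spectrum supply and demand.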
Doc 1465 : Artificial Intelligence and its Application in Various Fields
The term 'AI' is not new, but its actual meaning remains elusive. Artificial intelligence is a branch of computer science that aims to create machines as intelligent as human beings. AI mainly focuses on questions such as what knowledge is required for thinking, how knowledge can be represented, and how knowledge can be used in other fields, e.g. robotics. The scope of AI is much wider than commonly assumed: it is not limited to one or two areas; in the coming future, everything will be directly or indirectly linked to AI. Much research on artificial intelligence has suggested that, by the end of 2020, many tasks not possible for human beings would be carried out efficiently and accurately with the help of robots. Robotics is a branch of engineering that deals with the design, manufacture, and operation of robots. Artificial intelligence is being applied in many areas and can solve many problems, for example in robotics, e-commerce, domestic chores, medical treatment, gaming, mathematics, and military planning. The main idea behind merging artificial intelligence and robotics is to optimize the level of autonomy through learning. In the coming future we may overcome the disadvantages of robots, such as misuse, with the help of facial recognition, or use AI in other fields such as cybersecurity to prevent systems from being hacked. The applications of AI, how further applications can be implemented in the coming time, and how the disadvantages of using robots in everyday life can be overcome are discussed.
Doc 1475 : Algorithmic Ethics: Formalization and Verification of Autonomous Vehicle Obligations
We develop a formal framework for automatic reasoning about the obligations of autonomous cyber-physical systems, including their social and ethical obligations. Obligations, permissions and prohibitions are distinct from a system’s mission, and are a necessary part of specifying advanced, adaptive AI-equipped systems. They need a dedicated deontic logic of obligations to formalize them. Most existing deontic logics lack corresponding algorithms and system models that permit automatic verification. We demonstrate how a particular deontic logic, Dominance Act Utilitarianism (DAU), is a suitable starting point for formalizing the obligations of autonomous systems like self-driving cars. We demonstrate its usefulness by formalizing a subset of Responsibility-Sensitive Safety (RSS) in DAU; RSS is an industrial proposal for how self-driving cars should and should not behave in traffic. We show that certain logical consequences of RSS are undesirable, indicating a need to further refine the proposal. We also demonstrate how obligations can change over time, which is necessary for long-term autonomy. We then demonstrate a model-checking algorithm for DAU formulas on weighted transition systems, and illustrate it by model-checking obligations of a self-driving car controller from the literature.
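As a toy illustration of checking an obligation on a weighted transition system (far simpler than the DAU model checking the paper develops), one can verify that a driving policy always chooses a minimum-risk action in every reachable state. The scenario, the risk weights, and the obligation itself are invented for this example.

```python
def satisfies_min_risk_obligation(transitions, policy, start):
    """Check that `policy` picks a minimum-weight (least-risk) action
    in every state reachable under that policy."""
    seen, stack = {start}, [start]
    while stack:
        state = stack.pop()
        moves = transitions.get(state, [])
        if not moves:
            continue  # terminal state: no obligation to check
        min_weight = min(w for _, w, _ in moves)
        chosen_w, nxt = next((w, n) for a, w, n in moves if a == policy[state])
        if chosen_w != min_weight:
            return False  # obligation violated in this state
        if nxt not in seen:
            seen.add(nxt)
            stack.append(nxt)
    return True

# Lane-blockage scenario: braking (weight 1) is less risky than swerving (3).
T = {"approaching": [("brake", 1, "stopped"), ("swerve", 3, "adjacent_lane")],
     "stopped": [], "adjacent_lane": []}
print(satisfies_min_risk_obligation(T, {"approaching": "brake"}, "approaching"))   # True
print(satisfies_min_risk_obligation(T, {"approaching": "swerve"}, "approaching"))  # False
```

A deontic model checker works in the same spirit (exhaustively exploring states and testing a logical obligation in each) but evaluates DAU formulas rather than a fixed minimum-weight rule.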
Doc 1493 : Work-Life-Integration Through Flexible Work Arrangements: A Holistic Approach to Work Life Balance
In the last few years, HR professionals, organizations, employers, and other stakeholders have taken initiatives to create work-life balance. But while that approach calls for keeping the work and family domains separate, modern work culture and work demands pull in the opposite direction. Much of the overlap between work and non-work activities is owed to technology and gadgets such as the smartphone, tablet, and laptop. It is clearly not possible to keep these domains isolated from each other, and with limited time and energy the obligations of both sides cannot be fulfilled by keeping them separate. It is better to integrate work and non-work activities so that one can manage one's time and energy across all activities at once. To integrate the different domains, individuals need the autonomy to choose the time, location, and quantity of work. Selecting the right flexible work arrangement will integrate work and life and provide a balance between them.
Doc 1500 : Analysis on the Education Mechanism of the “Learning Power” Platform from the Perspective of Media Convergence
As a media learning platform, the “Learning Power” platform integrates the advantages of the internet, big data, and new media. Through the supply of massive explicit and implicit learning resources as well as the construction of the interactive space of “Learning Power,” it fully embodies the education mechanism of moral education. Specifically, it is reflected in the distinctive political position and the education goal mechanism of “moral education,” the education operation mechanism of “explicit and implicit unity,” the learning mechanism of “autonomy and cooperation integration,” and the feedback incentive mechanism of “gamification.” The organic combination and interactive operation of these four mechanisms form a collaborative education mechanism system of goal orientation, education operation, learning process, and feedback incentive.
Doc 1501 : Domestic Autonomy and Environmental International Non-Governmental Organizations: A Cross-National Analysis of Forest Loss
Non-technical Summary Nations across the world are concerned with environmental issues like forest loss. The majority of nations acknowledge the importance of reducing forest loss, and make commitments to do so. However, researchers often find that despite these commitments, and the work of non-governmental organizations, in many nations forest loss is not declining. This research argues that institutional capacity, specifically a nation's domestic autonomy, may help explain the ineffectiveness of environmental international non-governmental organizations (EINGOs) at reducing forest loss. Specifically, I argue that stronger domestic autonomy, measured as the extent to which a nation is free of the direct control of external political actors, improves the effectiveness of EINGOs at reaching their goal of reducing forest loss, owing to an autonomous state's relative strength and ability to integrate its version of environmentalism, or reinterpret existing norms of environmentalism, into EINGO ideologies and activities. Technical Summary Previous research finds that environmental international non-governmental organizations (EINGOs) tend to have differential impacts on environmental factors cross-nationally, such as forest loss. More recent work argues that decoupling between stated environmental norms and actual environmental outcomes may be the result of a lack of institutional capacity. Using ordinary least squares regression for 91 low- and middle-income nations from 2001 to 2014, I find that EINGOs reduce forest loss more in nations with higher rather than lower levels of domestic autonomy. However, I find that EINGOs and domestic autonomy on their own do not significantly predict forest loss. Social media summary This research argues that a nation's domestic autonomy may help explain the ineffectiveness of EINGOs at reducing forest loss.
Doc 1502 : A two-tier Blockchain framework to increase protection and autonomy of smart objects in the IoT
In recent years, the Internet of Things paradigm has become pervasive in everyday life, attracting the interest of the research community. Two of the most important challenges to be addressed concern the protection of smart objects and the need to guarantee them a high degree of autonomy. For this purpose, the definition of trust and reputation mechanisms appears crucial. At the same time, several researchers have started to adopt a common distributed ledger, such as a Blockchain, for building advanced solutions in the IoT. However, due to the high dimensionality of this problem, enabling a trust and reputation mechanism by leveraging a Blockchain-based technology could give rise to several performance issues in the IoT. In this paper, we propose a two-tier Blockchain framework to increase the security and autonomy of smart objects in the IoT by implementing a trust-based protection mechanism. In this framework, smart objects are suitably grouped into communities. To reduce the complexity of the solution, the first-tier Blockchain is local and is used only to record probing transactions performed to evaluate the trust of an object in another one of the same community or of a different community. Periodically, after a time window, these transactions are aggregated and the obtained values are stored in the second-tier Blockchain. Specifically, the stored values are the reputation of each object inside its community and the trust of each community in the other ones of the framework. In this paper, we describe in detail our framework, its behavior, the security model associated with it and the tests carried out to evaluate its correctness and performance.
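The two-tier aggregation step described above, with probing transactions recorded locally and then summarized per window into reputation and trust values, can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the class and field names and the use of a plain mean as the aggregate are assumptions, and real Blockchain storage is replaced by in-memory lists.

```python
from collections import defaultdict

class TwoTierLedger:
    """Toy sketch of the two-tier scheme: a local (first-tier) ledger
    records probing transactions; after a time window they are
    aggregated into a global (second-tier) ledger."""

    def __init__(self):
        # probing transactions: (rater_community, rater, target_community, target, score)
        self.first_tier = []
        self.second_tier = []  # aggregated per-window blocks

    def record_probe(self, rater_comm, rater, target_comm, target, score):
        self.first_tier.append((rater_comm, rater, target_comm, target, score))

    def close_window(self):
        """Aggregate the current window: reputation of each object inside
        its own community, and trust of each community in every other one."""
        rep_scores = defaultdict(list)    # (community, object) -> same-community scores
        trust_scores = defaultdict(list)  # (rater_comm, target_comm) -> cross-community scores
        for rc, r, tc, t, s in self.first_tier:
            if rc == tc:
                rep_scores[(tc, t)].append(s)
            else:
                trust_scores[(rc, tc)].append(s)
        block = {
            "reputation": {k: sum(v) / len(v) for k, v in rep_scores.items()},
            "trust": {k: sum(v) / len(v) for k, v in trust_scores.items()},
        }
        self.second_tier.append(block)
        self.first_tier.clear()  # local tier only holds the current window
        return block
```

In this sketch the first tier shrinks back to empty after every window, which mirrors the paper's motivation: the heavyweight global ledger only ever sees the small aggregated values, not every probe.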
This paper explores paradoxical tensions and their management in modular solution networks on digital platforms. A case study approach was adopted to examine how two firms in the lighting facility and ICT industries use digital platforms to coordinate their diverse, large and dynamic modular solution networks. Our findings reveal that due to contradictory goals in offering diverse customized solutions, solution providers with digital platforms are facing several paradoxical tensions between flexibility and efficiency, control and autonomy, and standardization and customization. We find that solution providers cope with these paradoxes through implementing two simultaneous mechanisms: unification and diversification. While the diversification mechanism aims at increasing variety among modules and module providers, unification focuses on forming similarities among them. These mechanisms are made possible through digital platform features, such as algorithms, online communities and platform access. This study contributes to the B2B literature by highlighting the nested nature of paradoxical tensions in the context of modular solution networks and by identifying mechanisms for managing these paradoxes. It also offers practical tools to solution providers to identify and manage paradoxical tensions in platform-based modular solution networks. • This study reveals three types of paradoxes in platform-based solution networks. • These paradoxes can be managed by unification and diversification mechanisms. • These mechanisms are made possible through digital platform features. • The study also offers practical tools to identify and manage these paradoxes.
Doc 1537 : Diagnose the viability of separation of power in Egypt: a cybernetics perspective
Purpose The aim of this study is to diagnose the Egyptian political system from the perspective of the viable system model (VSM). That in turn gives us significant insights on how to improve the distribution and control of power in Egypt to warrant greater autonomy and representativeness in the branches of government. Design/methodology/approach The proposed diagnostic framework in this study was accomplished by applying the principles and aspects of Stafford Beer’s VSM to the three state powers of Egypt: the executive, legislative and judiciary. In addition, it adopts the guidelines for applying the VSM described by both Stafford Beer and José Pérez Ríos. Findings The presented framework in this study gives a comprehensive explanation of the essential functions and their distribution among the entities of the Egyptian political system. In addition, the findings give us insights about the need for greater autonomy and self-organization among the political system’s entities. Greater communication, consensus, cooperation and coordination are needed to achieve balance, regulate the interactions between them and allow them to monitor each other without intervention. This, in turn, would ensure greater representation of citizen demands and expectations, and protect and guarantee their rights and freedoms. Originality/value The modeling of the Egyptian political system from a VSM perspective suggests the necessity of reform of the political system through a clear division of functions and tasks within the operational entities and subentities, decentralization of power structures and utilization of the decision-making mechanism to reach public acceptance. The application of the VSM, for the sake of diagnosing and designing the viability of separation of power, can be considered highly original.
Doc 1544 : Immersive and interactive cyber-physical system (I2CPS) and virtual reality interface for human involved robotic manufacturing
• Immersive and interactive cyber-physical system (I2CPS) framework for collaboration between humans, autonomy, and physical systems. • Middleware and a data-to-operation protocol were combined to communicate information across a wide range of contexts. • A new virtual reality interface provides a collaborative environment for augmenting human skills with autonomy. Smart manufacturing increases the demand for new interfaces for communicating with autonomies such as big data analysis, digital twins, and self-decisive control. Collaboration between humans and the autonomy becomes an imperative factor for improving productivity. However, current human-machine interfaces (HMI) such as 2D screens or panels require process knowledge and long-term experience to operate; they are neither intuitive for novice workers nor designed to work with the autonomy. This study proposes a human interface framework for cyber-physical systems (CPS) based on virtual reality, named immersive and interactive CPS (I2CPS), to create an interface for human-machine-autonomy collaboration. By combining a data-to-information protocol and middleware, MTConnect and the Robot Operating System (ROS), heterogeneous physical systems were integrated with virtual assets such as digital models, digital shadows, and virtual traces of human work in the virtual reality (VR) based interface. All physical and virtual assets were integrated in the interface so that humans, autonomy, and physical systems can collaborate. Applying constraints in the VR interface and deploying virtual human work to industrial robots were demonstrated to verify the effectiveness of the I2CPS framework, showing collaboration between human and autonomy: augmentation of human skills by autonomy, and virtual robot teaching to generate automatic robot programs.
Doc 1547 : From cybersin to cybernet. Considerations for a cybernetics design thinking in the socialism of the XXI century
From its origins, cybernetics has grounded its ambitions in the concept of transversality, today called transdisciplinarity. Within its history, the breaking point is unquestionably Stafford Beer and the VSM applied in Salvador Allende’s government. Chile’s historical conditions and context undoubtedly allowed a series of conceptual emergences that were not necessarily developed after the 1973 coup d’état. Beer’s design, as he claims, could serve both a socialist vision and a fascist command. This tells us that the tool depends on the hand of the administrator. On the other hand, good but insufficient attempts have been made in the field of biology, such as the theory of autopoiesis and epistemological positions concerning the observer, which have not been able to add value to the VSM. The errors in the design of the VSM can be summarized as follows: confusing interactions with relationships, confusing co-autonomy with self-organization, treating centrism and centralities as necessarily central, and establishing isomorphisms in a mathematical system aimed at conceptual homologation, as is the case with Information and Entropy. This work shows that the VSM must obligatorily migrate to a Relational Viable System, whose bases are relations of cooperation and reciprocity built on heterarchical structures for limited or scarce material and energy resources. This is the basis of the socialist design, which forces the economy to reduce the production of Non-Required Variety.
Doc 1555 : Towards Autonomous Driving Using Vision Based Intelligent Systems
Vision based systems have become an integral part of autonomous driving. The autonomous vehicle industry has made large progress in the perception of the environment as a result of improvements to vision based systems. As the industry moves up the ladder of automation, safety features come more and more into focus. Different safety measures have to be taken into consideration in different driving situations. One of the major concerns at the highest level of autonomy is obtaining the ability to understand both internal and external situations. Most research on vision based systems is focused on image processing and artificial intelligence techniques like machine learning and deep learning. With the current generation of technology being the generation of the “Connected World”, there is no longer a lack of data. As a result of the introduction of the Internet of Things, most of these connected devices are able to share and transfer data. Vision based techniques are hugely dependent on such data.
Doc 1561 : To the question of modernity and corresponding representations of subjectivity: “dividual” and social organizations
This article is part of an extensive research program on the value characteristics of the IT community and their potential for innovative development. In view of this, the author examines the problem of characterizing the modern cultural situation and, as a result, of deducing the criterion that establishes what “modern” is. Based on previously acquired conclusions (according to which such a criterion is the consideration of irreducible multiplicity, and the major risk is the loss of autonomy by individuals), the author aims to find a model of subjectivity that would be simultaneously multiple and autonomous. The search for a model that meets this criterion is carried out by referring to the control technologies developed within the framework of third-order cybernetics. The novelty of this article consists in the peculiarity of its approach to the set task: the question of the form and method of determining modernity was first raised and solved within the Russian academic philosophical community at the previous stage of this research, which determines its relevance at the current stage as well. Comparison of the work of Russian and foreign authors reveals the parallelism between the development of cybernetics and psychoanalysis; demonstrates the conceptual intersections of third-order cybernetics and schizoanalysis; and establishes the correspondence between the subjectivity of the individual and the type of organization they are engaged in. The article offers the concept of the “dividual” as a model suitable for solving the set task. The conclusion is drawn that the existing potential of the IT community for modernization is neutralized in the context of a transition towards a hierarchical organizational structure.
Doc 1569 : Impact of artificial intelligence on civilization: Future perspectives
Artificial intelligence is a scientific term that refers to artifacts that detect situations and respond to those circumstances with actions. The ability to create such improved artifacts has a growing impact on our society. This paper describes the economic and social changes brought by the use of artificial intelligence since the advent of smartphones. Smartphones have contributed significantly to big data, which adds more efficiency to machine learning. The paper goes on to explain the political, economic, and personal issues that humanity will face soon, as well as regulatory strategies to address them. In general, artificial intelligence is not always as precise a technology as one might anticipate, and the problems it raises can be more critical as a result. Because of increased access to knowledge for both individuals and nations, there is a danger of threats to identity and autonomy.
Doc 1600 : Hierarchical Human-Inspired Control Strategies for Prosthetic Hands
The abilities of the human hand have always fascinated people, and many studies have been devoted to describing and understanding a mechanism so perfect and important for human activities. Hand loss can significantly affect the level of autonomy and the capability of performing the activities of daily life. Although technological improvements have led to the development of mechanically advanced commercial prostheses, the control strategies are rather simple (proportional or on/off control). The use of these commercial systems is unnatural and not intuitive, and they are therefore frequently abandoned by amputees. The components of an active prosthetic hand are the mechatronic device, the decoding system that translates human biological signals into gestures, and the control law that translates all the inputs into desired movements. The real challenge is the development of a control law replacing human hand functions. This paper presents a literature review of the control strategies of prosthetic hands with a multiple-layer or hierarchical structure, and points out the main critical aspects of current solutions in terms of the human functions replicated by the prosthetic device. The paper finally provides several suggestions for designing a control strategy able to mimic the functions of the human hand.
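The layered structure the review describes, with biological-signal decoding sitting on top of a low-level control law, can be illustrated with a toy pipeline. Everything below (the threshold values, gesture names, and the simple proportional law) is invented for the example and stands in for the far richer strategies the paper surveys.

```python
class ProstheticController:
    """Illustrative two-layer hierarchy mirroring the components the
    review lists: signal decoding (high level) feeding a control law
    (low level) that drives five finger positions in [0, 1]."""

    # target finger positions per decoded gesture (invented values)
    GESTURES = {
        "rest": [0.0] * 5,
        "pinch": [0.8, 0.8, 0.0, 0.0, 0.0],
        "power_grasp": [1.0] * 5,
    }

    def decode(self, emg_amplitude):
        # High level: map a pre-processed EMG feature to a gesture intent.
        if emg_amplitude < 0.2:
            return "rest"
        return "pinch" if emg_amplitude < 0.6 else "power_grasp"

    def control_step(self, current, target, gain=0.5):
        # Low level: proportional control toward the target finger positions.
        return [c + gain * (t - c) for c, t in zip(current, target)]

    def update(self, emg_amplitude, current_pose):
        gesture = self.decode(emg_amplitude)
        return gesture, self.control_step(current_pose, self.GESTURES[gesture])
```

The separation of `decode` from `control_step` is the point of the hierarchy: the decoding layer can be swapped (for, say, a learned classifier) without touching the low-level law, which is one of the design freedoms the review highlights.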
Doc 1617 : Agency in augmented reality: exploring the ethics of Facebook’s AI-powered predictive recommendation system
The development of predictive algorithms for personalized recommendations that prioritize ads, filter content, and tailor our decision-making processes will increasingly impact our society in the upcoming years. One example of what this future might hold was recently presented by Facebook Reality Labs (FRL), which works on augmented reality (AR) glasses powered by contextually aware AI that allows the user to “communicate, navigate, learn, share, and take action in the world” (Facebook Reality Labs 2021). A major feature of those glasses is “the intelligent click”, which presents action prompts to the user based on their personal history and previous choices. The user can accept or decline those suggested action prompts depending on individual preferences. Facebook/Meta presents this technology as a gateway to “increased agency”. However, Facebook’s claim presumes a simplistic view of agency according to which our agentive capacities increase in parallel with the ease with which our actions are carried out. Technologies that structure people’s lives need to be based on a deeper understanding of agency that serves as the conceptual basis on which predictive algorithms are developed. With the goal of mapping this emerging terrain, the aim of this paper is to offer a thorough analysis of the agency-limiting risks and the agency-enhancing potentials of Facebook’s “intelligent click” feature. Based on a concept of agency by Dignum (Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer International Publishing, Cham, 2019), the three agential dimensions of autonomy (acting independently), adaptability (reacting to changes in the environment), and interactivity (interacting with other agents) are analyzed with respect to our ability to make self-determining choices.
Doc 1634 : Handling Multifacets of Trust Management in Cyber Physical Systems
Trust should be learnt from history and be context sensitive; it should not be absolute in nature. Due to the conglomeration of various technologies in a secure cyber physical system, it is quite a challenge to handle trust issues in a cyber physical system. Trust management in cyber physical systems is needed due to the increase in the degree of autonomy, decentralized policies, dynamic environments, and decision-making based on social rules, customs, laws, values, and ethics. This chapter brings to light the existing strategies already applied by a few organizations, along with their inherent benefits and consequent shortcomings. There are many factors contributing to the establishment, expression, evaluation, and maintenance of trustworthiness. In this chapter we advocate a novel framework for trust management which stands up to the research directions of how to build a unified framework for trust management, how to modify the way we compute trust, and how to decide the right granularity for a trust model.
Doc 1667 : Designing Run-time Evolution for Dependable and Resilient Cyber-Physical Systems Using Digital Twins
The proliferation of Smart Cyber-Physical Systems (SCPS) is increasingly blurring the boundaries between physical and virtual entities. This trend is revolutionizing multiple application domains along the whole human activity spectrum, while pushing the growth of new businesses and innovations such as smart manufacturing, cities and transportation systems, as well as personalized healthcare. Technological advances in the Internet of Things, Big Data, Cloud Computing and Artificial Intelligence have effected tremendous progress toward the autonomic control of SCPS operations. However, the inherently dynamic nature of physical environments challenges SCPS’ ability to perform adequate control actions over managed physical assets in a myriad of contexts. From a design perspective, this issue is related to the system states of operation that cannot be predicted entirely at design time, and the consequential need to define adequate capabilities for run-time self-adaptation and self-evolution. Nevertheless, adaptation and evolution actions must be assessed before realizing them in the managed system in order to ensure resiliency while minimizing the risks. Therefore, the design of SCPS must address not only dependable autonomy but also operational resiliency. In light of this, the contribution of this paper is threefold. First, we propose a reference architecture for designing dependable and resilient SCPS that integrates concepts from the research areas of Digital Twin, Adaptive Control and Autonomic Computing. Second, we propose a model identification mechanism for guiding self-evolution, based on continuous experimentation, evolutionary optimization and dynamic simulation, as the architecture’s first major component for dependable autonomy. Third, we propose an adjustment mechanism for self-adaptation, based on gradient descent, as the architecture’s second major component, addressing operational resiliency.
Our contributions aim to further advance the research of reliable self-adaptation and self-evolution mechanisms and their inclusion in the design of SCPS. Finally, we evaluate our contributions by implementing prototypes and showing their viability using real data from a case study in the domain of intelligent transportation systems.
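The paper's second component, a gradient-descent adjustment mechanism for self-adaptation, can be sketched generically. The sketch below is an assumption-laden illustration rather than the authors' mechanism: it treats the managed system's control parameters as a numeric vector and estimates gradients of a scalar cost by finite differences.

```python
def adjust(params, cost, lr=0.1, eps=1e-4, steps=50):
    """Generic gradient-descent adjustment loop in the spirit of a
    self-adaptation mechanism: nudge the managed system's control
    parameters downhill on a cost measured against desired operation.
    Gradients are estimated numerically (forward finite differences),
    so `cost` can be any black-box evaluation, e.g. a simulation run."""
    p = list(params)
    for _ in range(steps):
        base = cost(p)
        grad = []
        for i in range(len(p)):
            bumped = p.copy()
            bumped[i] += eps
            grad.append((cost(bumped) - base) / eps)
        # move each parameter a small step against its gradient
        p = [pi - lr * gi for pi, gi in zip(p, grad)]
    return p
```

In a digital-twin setting the `cost` callable would be evaluated against the twin rather than the live system, which matches the paper's point that adaptation actions should be assessed before being realized in the managed system.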
Doc 1701 : Building trust and responsibility into autonomous human-machine teams
Harm can be caused to people and property by any highly-automated system, even with a human user, due to misuse or design; but which human has the legal liability for the consequences of the harm is not clear, or even which laws apply. The position is less clear for an interdependent Autonomous Human Machine Team System (A-HMT-S) which achieves its aim by reallocating tasks and resources between the human Team Leader and the Cyber Physical System (CPS). A-HMT-S are now feasible and may be the only solution for complex problems. However, legal authorities presume that humans are ultimately responsible for the actions of any automated system, including ones using Artificial Intelligence (AI) to replace human judgement. The concept of trust for an A-HMT-S using AI is examined in this paper with three critical questions being posed which must be addressed before an A-HMT-S can be trusted. A hierarchical system architecture is used to answer these questions, combined with a method to limit a node’s behaviour, ensuring actions requiring human judgement are referred to the user. The underpinning issues requiring Research and Development (R&D) for A-HMT-S applications are identified and where legal input is required to minimize financial and legal risk for all stakeholders. This work takes a step towards addressing the problems of developing autonomy for interdependent human-machine teams and systems.
Doc 1703 : ARTIFICIAL INTELLIGENCE IN MEDICAL PRACTICE: REGULATIVE ISSUES AND PERSPECTIVES.
The aim of the research is to identify the specifics and nature of AI in healthcare, to establish the complexities of AI implementation in healthcare, and to propose ways to eliminate them. Materials and methods: This study was conducted during June-October of 2020. Through a broad literature review and an analysis of EU and USA regulatory acts, scientific research, and the opinions of progressive-minded people in this sphere, this paper provides a guide to understanding the essence of AI in healthcare and the specifics of its regulation. It is based on dialectical, comparative, analytic, synthetic and comprehensive methods. Results: One of the first broad definitions of AI sounded like “Artificial Intelligence is the study of ideas which enable computers to do the things that make people seem intelligent … The central goals of Artificial Intelligence are to make computers more useful and to understand the principles which make intelligence possible.” There are two approaches to naming this technology: “Artificial Intelligence” and “Augmented Intelligence.” We prefer to use the more common category of “Artificial Intelligence” rather than “Augmented Intelligence” because the latter, from our point of view, leaves much space for “human supervision”, which would limit the sense of AI while it will undoubtedly develop in the future. AI in current practice is interpreted in three forms: AI as a simple electronic tool without any level of autonomy (like an electronic assistant or “calculator”), AI as an entity with some level of autonomy but under human control, and AI as an entity with broad autonomy, substituting human activity wholly or partly; and we have to admit that the first one cannot be considered AI at all in the current conditions of scientific development.
Descriptions of AI often tend to operate with big technological products like DeepMind (by Google), Watson Health (by IBM), or Healthcare’s Edison (by General Electric), but in fact a lot of smaller technologies also use AI in the healthcare field: smartphone applications, wearable health devices and other examples of the Internet of Things. At the current stage of development, AI in medical practice exists in three technical forms (software, hardware, and mixed forms) using three main scientific-statistical approaches: the flowchart method, the database method, and the decision-making method. All of them are usable, but they are differently suited to AI implementation. The main issues of AI implementation in healthcare are connected with the nature of the technology itself and the complexities of legal support in terms of safety and efficiency, privacy, and ethical and liability concerns. Conclusion: The conducted analysis makes it possible to identify a number of pros and cons in the field of AI use in healthcare. Undoubtedly this is a promising area with a lot of gaps and grey zones to fill in. Furthermore, the main challenge lies not in the technology itself, which is rapidly growing, evolving, and uncovering new areas of use, but rather in the legal framework, which clearly lacks appropriate regulations and some political, ethical, and financial transformations. Thus, the core questions remain: Is this technology by its nature suitable for healthcare at all? Is the current legislative framework appropriate to regulate AI in terms of safety, efficiency, and premarket and postmarket monitoring? How should the model of liability connected with the use of AI technology in healthcare be constructed? How can privacy be ensured without restricting the use of AI technology? Should intellectual property rights prevail over public health concerns? Many questions remain to be addressed in order to move in line with technology development and to get the benefits of its practical implementation.
Doc 1705 : Algorithmic Distortion of Informational Landscapes
The possible impact of algorithmic recommendation on the autonomy and free choice of Internet users is being increasingly discussed, especially in terms of the rendering of information and the structuring of interactions. This paper aims at reviewing and framing this issue along a double dichotomy. The first one addresses the discrepancy between users’ intentions and actions (1) under some algorithmic influence and (2) without it. The second one distinguishes algorithmic biases on (1) prior information rearrangement and (2) posterior information arrangement. In all cases, we focus on and differentiate situations where algorithms empirically appear to expand the cognitive and social horizon of users, from those where they seem to limit that horizon. We additionally suggest that these biases may not be properly appraised without taking into account the underlying social processes which algorithms are building upon.
Doc 1715 : A Glimpse of Physical Layer Decision Mechanisms: Facts, Challenges, and Remedies
Communications are realized as a result of successive decisions at the physical layer, from modulation selection to multi-antenna strategy, and each decision affects the performance of the communication systems. Future communication systems must include extensive capabilities as they will encompass a wide variety of devices and applications. Conventional physical layer decision mechanisms may not meet these requirements, as they are often based on impractical and oversimplifying assumptions that result in a trade-off between complexity and efficiency. By leveraging past experiences, learning-driven designs are promising solutions to present a resilient decision mechanism and enable rapid response even under exceptional circumstances. The corresponding design solutions should evolve following the lines of learning-driven paradigms that offer more autonomy and robustness. This evolution must take place by considering the facts of real-world systems and without restraining assumptions. In this paper, the common assumptions in the physical layer are presented to highlight their discrepancies with practical systems. As a solution, learning algorithms are examined by considering the implementation steps and challenges. Furthermore, these issues are discussed through a real-time case study using software-defined radio nodes to demonstrate the potential performance improvement. A cyber-physical framework is presented to incorporate future remedies.
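As a hedged illustration of the learning-driven decision mechanisms the paper advocates, a minimal epsilon-greedy bandit can learn a physical layer choice (here, modulation selection) from observed rewards rather than from model-based assumptions. The scheme names and reward model below are illustrative assumptions, not the paper's algorithm.

```python
import random

class ModulationSelector:
    """Illustrative epsilon-greedy bandit that learns which modulation
    scheme yields the highest observed reward (e.g. measured throughput)
    instead of relying on fixed, assumption-laden decision rules."""

    def __init__(self, schemes, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {s: 0 for s in schemes}
        self.values = {s: 0.0 for s in schemes}  # running mean reward per scheme

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))   # explore
        return max(self.values, key=self.values.get)  # exploit best-so-far

    def update(self, scheme, reward):
        self.counts[scheme] += 1
        n = self.counts[scheme]
        # incremental mean of observed rewards for this scheme
        self.values[scheme] += (reward - self.values[scheme]) / n
```

In a live link the reward would come from measured performance after each transmission, so the mechanism keeps adapting as conditions drift, which is the resilience-under-exceptional-circumstances property the paper argues for.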
Doc 1721 : Methodology for Analyzing the Manageability of Country Communication Modes
The analysis of communication modes is a new direction not only in Russian but also in foreign studies. As a result, its methodology is just being formed. The article is devoted to the description of the author’s methodology, which allows one to identify the types of management and the degree of controllability of communication modes in different countries. The author relies on a systematic, cybernetic approach and on the theory of complex systems, and uses the matrix method and the method of comparative analysis. Manageability is defined as a measure of control by the management center (management entities that make decisions about rules and communication institutions), taking into account the voluntary consent of the objects of management along with their level of autonomy and subordination. The basis of the author’s methodology is three significant parameters: 1) location (inside or outside the country); 2) the level of conventionality of the communication regime’s management centre; 3) assessment of the indicators of manageability. The main criteria of manageability are: the ability of communication mode control centres to transfer the mode from one state to another without conflict; the ability to achieve the controlled parameters of the communication mode; and the ability to use the mechanisms of self-organization and self-reflection of control objects to regulate communication modes.
Doc 1726 : Friendly but Faulty: A Pilot Study on the Perceived Trust of Older Adults in a Social Robot
The efforts to promote ageing-in-place of healthy older adults via cybernetic support are fundamental to avoid possible consequences associated with relocation to facilities, including the loss of social ties and autonomy, and feelings of loneliness. This requires an understanding of key factors that affect the involvement of robots in eldercare and the elderly’s willingness to embrace the robots’ domestic use. Trust is argued to be the main foundation of an effective adult-care provider, which might be more significant if such providers are robots. Establishing and maintaining trust usually involves two main dimensions: 1) the robot’s reliability (i.e., performance) and 2) the robot’s intrinsic attributes, including its degree of anthropomorphism and benevolence. We conducted a pilot study using a mixed methods approach to explore the extent to which these dimensions and their interaction influenced elderly trust in a humanoid social robot. Using two independent variables, type of attitude (warm, cold) and type of conduct (error, no-error), we aimed to investigate whether the older adult participants would trust a purposefully faulty robot more when the robot exerted a warm behaviour enhanced with non-functional touch than a robot that did not, and in what way the robot error affected trust. Lastly, we also investigated the relationship between trust and a proxy variable of actual use of robots (i.e., intention to use robots at home). Given the volatile and context-dependent nature of trust, our close-to-real-world scenario of elder-robot interaction involved the administration of health supplements, in which the severity of robot error might have a greater implication on the perceived trust.
Doc 1744 : On Being Transhuman: Commercial BCIs and the Quest for Autonomy
Facebook’s catchphrase “What’s on your mind”, prompting the user to share thoughts with their digital social circles, has gained a new, literal meaning in recent years: targeted advertising, fake news and computational propaganda are all examples of mental manipulation exerted for profit or for power through harnessing AI at scale for the purposes of online profiling. In most cases, this involves an elaborate interpretation of one’s digital footprint: the huge amount of data that is generated by our daily online and offline interactions and which defines our behaviour. This chapter takes a slightly different approach and seeks to explore the use of AI to retrieve, analyse and predict data that has not been externalised, yet which most defines us: brain data. There has never been a more promising time in history for delineating the contours of human thought: publicly and privately funded projects studying the human brain have produced a high volume of scientific papers and findings in the last few decades, which more often than not are sensationalised in the news. The ambitious plan to explain the mysteries of the human brain has not fully materialised; however, ambition drives profit, and therefore the idea of using AI to decode the human brain has been a fast-growing commercial venture for many tech giants, who have been investing heavily in corporate R&D neurotech-related projects. The chapter proceeds in four parts: Part 1 offers a historical overview of “mind reading” techniques, building up some context as to how the neurotech market boomed and started employing AI to unravel the mysteries of the human brain beyond the clinical sphere. This is then followed by a techno-legal evaluation of the monitoring, collection and analysis of brain imaging data from the use of commercial BCIs in Part 2. Building further on this, Part 3 explores the scope for user empowerment and agency in commercial BCI.
This will lead to the main argument put forth in Part 4, namely the fact that commercial BCI constitutes a special case that seems to fall through the cracks of robust data protection frameworks, such as the GDPR. In conclusion, the chapter highlights the need for data protection laws to reflect the conceptual redefinition of autonomy in the light of AI-driven pervasive neurotechnologies.