Artificial intelligence (AI) methods are used primarily to perform highly complex tasks, such as processing natural language or classifying objects in images. These methods not only enable substantially higher levels of automation to be attained, but also open up completely new fields of application.
The term "artificial intelligence" is now used primarily in the context of machine learning, for example in neural networks, decision trees or support vector machines. However, it also covers a large number of other applications, such as expert systems or knowledge graphs.
A feature common to all these methods is their ability to solve problems by modelling concepts that are generally associated with intelligent behaviour. The use of artificial intelligence enables concepts such as learning, planning, sensing, communicating and cooperating to be transferred to technical systems. With these capabilities, completely new intelligent systems and applications can be achieved. Artificial intelligence is therefore often seen as the key technology of the future.
Protective devices and control systems based upon artificial intelligence have already made fully automated vehicles and robots possible. They also enable assistive systems to recognize hazardous situations and thereby prevent accidents.
However, the use of systems, particularly machines, that are based upon AI methods also changes the physical and mental stresses to which workers are exposed. In order to prevent the use of this technology from giving rise to new hazards, or to reduce such hazards, trustworthy artificial intelligence is required.
The concept of trustworthiness in this sense extends well beyond safety and encompasses a number of further essential characteristics.
Owing to the dramatic pace at which modern artificial intelligence methods, foremost among them deep learning, are developing, it is however still largely unclear exactly how these characteristics can be achieved in a trustworthy AI system.
The Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA) is therefore working on concepts by which safety and health at work can not only be maintained when such technology is used, but also enhanced by its use.
Activities for this purpose include the following:
Steimers, A.; Schneider, M.: Sources of Risk of AI Systems. Int. J. Environ. Res. Public Health 2022, 19, 3641. https://doi.org/10.3390/ijerph19063641