Digital First. The Threats and Hopes Pinned on Digitisation
Thanks to the rapid development of information technology, digitisation and artificial intelligence are no longer science fiction but practical reality. Solutions of this kind can already be found in communications, the military and marketing. Autonomous systems are used in medicine, energy, transportation and education. Will they become a remedy for the ills of the post-industrial era that plague us?
Artificial intelligence is the subject of incessant debate, and although experts have not reached consensus, it is most often understood as complex human-designed systems that act rationally without supervision, i.e. in a way that makes sense from a human perspective. Klaus Schwab, author of the term “Fourth Industrial Revolution,” argues that algorithm-based solutions will drive the economic advancement of countries and entire regions.
A living being or a technology?
The term “artificial intelligence” was coined by American computer scientist John McCarthy in 1956 at a scientific conference at Dartmouth College. He defined it as “the science and engineering of making intelligent machines.” Today’s AI research draws on the theses and accomplishments of computer science, psychology, philosophy, cognitive science, linguistics, economics, probability theory, and logic. As a result, AI is now defined in two distinct ways: ontically, as a distinct entity endowed with self-awareness and self-learning capabilities, and praxeologically, as a technology that supports humans in tasks requiring repetition and precision.