AI in Society – Potential, Risks, and the Limits of Trust

Authors

Piecuch, A.

DOI:

https://doi.org/10.15584/di.2025.20.4

Keywords:

artificial intelligence, technique, technology, machine learning, industry 4.0, job market

Abstract

The article discusses current issues related to artificial intelligence (AI). So far, the scientific community has not established a consistent, universally accepted definition of AI. The very term “intelligence” applied to machines remains controversial, since many aspects of intelligence, such as emotions and consciousness, are still considered uniquely human traits. Moravec observes that while computers excel at complex calculations, they struggle with cognitive tasks that humans find simple. AI is a key component of Industry 4.0, which rests on automation and digitalization. It brings risks as well as benefits, however: numerous AI system failures have already led to accidents and disasters. Automation also affects the labor market, threatening both manual and intellectual professions. In education, AI can be helpful in many cases, but it may also exacerbate learning-related problems; the phenomenon of “AI hallucinations” is particularly significant in educational contexts. When using AI, one should apply a principle of limited trust, much as in road traffic. Tools in themselves are neither good nor bad; what matters is how, and to what end, humans use them.

Published

2025-12-31

How to Cite

Piecuch, A. (2025). AI in Society – Potential, Risks, and the Limits of Trust. DIDACTICS OF INFORMATION TECHNOLOGY, 20, 50–67. https://doi.org/10.15584/di.2025.20.4

Section

ICT AND SOCIETY