AI in Society – Potential, Risks, and the Limits of Trust
DOI: https://doi.org/10.15584/di.2025.20.4

Keywords: artificial intelligence, technique, technology, machine learning, industry 4.0, job market

Abstract
The article discusses current issues related to artificial intelligence (AI). To date, no consistent, universally accepted definition of AI has been established within the scientific community. The very term "intelligence" as applied to machines remains controversial, since many aspects of intelligence, such as emotions and consciousness, are still considered uniquely human traits. Moravec points out that while computers excel at complex calculations, they struggle with simple cognitive tasks. AI is a key component of Industry 4.0, which is based on automation and digitalization. Yet it brings not only benefits: numerous AI system failures have already led to accidents and disasters. Automation also affects the labor market, threatening both manual and intellectual professions. In education, AI can be helpful in many cases but may also exacerbate learning-related problems; the phenomenon of "AI hallucinations" is particularly significant in educational contexts. When using AI, one should apply the principle of limited trust, much as in road traffic. Tools themselves are neither good nor bad; what matters is how, and for what purpose, humans use them.
License
Copyright (c) 2025 DIDACTICS OF INFORMATION TECHNOLOGY

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.